The adoption of Artificial Intelligence is gaining momentum, but the fairness of the algorithms behind it is under heavy scrutiny from federal authorities. Despite many efforts by organizations to keep their AI services and solutions fair, pervasive, pre-existing biases in AI have become a growing challenge in recent years. Big tech organizations such as Facebook, Google, Amazon, and Twitter, among others, have faced the wrath of federal agencies in recent months.

In the wake of the death of George Floyd and the #BlackLivesMatter movement, organizations have become more vigilant about the operational framework of their AI. With federal, national, and international agencies constantly pointing to discriminatory algorithms, tech start-ups and established organizations alike are struggling to make their AI solutions fair.

But how can organizations steer clear of deploying discriminatory algorithms? What solutions will thwart such biases? The legal and statistical standards articulated by federal agencies go a long way toward curbing algorithmic bias. For example, the existing legal standards in laws such as the Equal Credit Opportunity Act, the Civil Rights Act, and the Fair Housing Act reduce the possibility of such biases.

Moreover, the effectiveness of these standards depends on the nature of the algorithmic discrimination an organization faces. Broadly, organizations confront two types of discrimination, intentional and unintentional, known respectively as disparate treatment and disparate impact.

Disparate treatment is intentional employment discrimination and carries the highest legal penalties. Organizations must avoid engaging in such discrimination when adopting AI; analyzing records of employment decisions and behavior can help detect and avoid it.

Disparate impact, the unintentional form of discrimination, occurs when policies, practices, rules, or other systems that appear neutral nevertheless produce a disproportionate impact. For example, a test that unintentionally and disproportionately eliminates minority applicants is a case of disparate impact.

Disparate impact is heavily influenced by societal inequalities, and it is extremely difficult to avoid because those inequalities exist in almost every area of the societal framework. Unfortunately, organizations do not have a specific solution that can rectify disparate impact immediately. Its tenets are so deeply ingrained that identifying them becomes tedious, and organizations are often reluctant to try. For example, there is no single accepted definition of ‘fairness’: in a racial context the word concerns discrimination, while in an organizational setting it often signifies accuracy. These two concepts, along with two dozen more competing definitions, complicate the process of training algorithms.


Additionally, a Google blog post explains fairness in machine learning systems through a lending problem. Hansa Srinivasan, Software Engineer at Google Research, states, “This problem is a highly simplified and stylized representation of the lending process, where we focus on a single feedback loop in order to isolate its effects and study it in detail. In this problem formulation, the probability that individual applicants will pay back a loan is a function of their credit score. These applicants also belong to one of an arbitrary number of groups, with their group membership observable by the lending bank.”
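To make that formulation concrete, here is a minimal Python sketch of a single round of such a lending loop. It is an illustration under invented assumptions, not Google's actual simulation: the repayment model, score ranges, group names, threshold, and score adjustments are all made up for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def repay_probability(score):
    """Toy assumption: repayment probability grows linearly with credit score (300-850)."""
    return np.clip((score - 300) / 550, 0.05, 0.95)

def simulate_round(scores, threshold):
    """One lending round: approve applicants above the threshold, observe repayment,
    and move each approved borrower's score up on repayment or down on default."""
    approved = scores >= threshold
    repaid = rng.random(scores.shape) < repay_probability(scores)
    new_scores = scores.copy()
    new_scores[approved & repaid] += 30      # successful loans raise scores
    new_scores[approved & ~repaid] -= 60     # defaults lower them
    return new_scores, approved, repaid

# Two groups with different (historically shaped) starting score distributions.
groups = {
    "group_A": rng.normal(650, 60, 10_000),
    "group_B": rng.normal(600, 60, 10_000),
}

for name, scores in groups.items():
    new_scores, approved, repaid = simulate_round(scores, threshold=620)
    print(f"{name}: approval rate {approved.mean():.2f}, "
          f"mean score change {(new_scores - scores).mean():+.1f}")
```

Iterating rounds like this is what exposes the feedback loop the quote describes: today's approval decisions reshape tomorrow's credit score distributions, and any gap between groups can widen or shrink depending on how the threshold is chosen.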

A paper titled “Delayed Impact of Fair Machine Learning” from the Berkeley Artificial Intelligence Research (BAIR) lab points out that machine learning systems trained to minimize prediction error may often exhibit discriminatory behavior based on sensitive characteristics such as race and gender. Lydia T. Liu, the lead researcher and author of the paper, states, “One reason could be due to historical bias in the data. In various application domains including lending, hiring, criminal justice, and advertising, machine learning has been criticized for its potential to harm historically underrepresented or disadvantaged groups.”
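As a hedged illustration of that claim (not the paper's own experiment), the sketch below trains an ordinary error-minimizing classifier on synthetic data whose historical labels under-select one group, then compares predicted selection rates. The feature, group labels, and bias magnitude are invented for the example.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 20_000

# Synthetic applicants: one true qualification score and a binary group label.
group = rng.integers(0, 2, n)            # 0 and 1 are illustrative group labels
qualification = rng.normal(0, 1, n)

# Historically biased labels: past decisions under-selected group 1 at equal qualification.
historical_label = (qualification - 0.8 * group + rng.normal(0, 0.5, n)) > 0

# Train an ordinary error-minimizing classifier on the biased history.
# Group is included as a feature here to keep the demo simple; in practice the same
# effect can arise through proxy features correlated with group membership.
X = np.column_stack([qualification, group])
clf = LogisticRegression().fit(X, historical_label)
pred = clf.predict(X)

for g in (0, 1):
    print(f"group {g}: predicted positive rate {pred[group == g].mean():.2f}")
```

Even though the classifier only minimizes prediction error on the historical labels, it reproduces the historical under-selection of group 1.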

Researchers and statisticians have formulated many methodologies that abide by these legal standards. One methodology that has proved comparatively effective in dealing with algorithmic discrimination is the 80% rule. Formulated in 1978 by the EEOC, the Department of Labor, the Department of Justice, and the Civil Service Commission, it sets out guidelines for Employee Selection Procedures. […]
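The 80% rule (often called the four-fifths rule) can be checked mechanically: compute each group's selection rate and compare it with the highest group's rate. Below is a minimal sketch with invented applicant counts.

```python
# Four-fifths (80%) rule check: each group's selection rate should be at least
# 80% of the selection rate of the group with the highest rate.
# The counts below are made up for illustration.
selections = {
    "group_A": {"applied": 400, "selected": 200},   # 50% selection rate
    "group_B": {"applied": 300, "selected": 105},   # 35% selection rate
}

rates = {g: v["selected"] / v["applied"] for g, v in selections.items()}
highest = max(rates.values())

for g, rate in rates.items():
    ratio = rate / highest
    flag = "OK" if ratio >= 0.8 else "potential adverse impact"
    print(f"{g}: selection rate {rate:.0%}, ratio to highest {ratio:.2f} -> {flag}")
```

Under the guidelines, a ratio below 0.8 is generally treated as evidence of adverse impact, which the organization then has to justify or remedy.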

Read more at www.analyticsinsights.com