Human societies are extremely complex. The cultural, racial and geographical differences around the globe and the lack of curated data make ‘fairness’ in technology a huge challenge.

Copyright by analyticsindiamag.com

Now, in an attempt to track the long-term societal impacts of artificial intelligence, Google researchers recently released a machine learning fairness gym. They built it on top of OpenAI's Gym toolkit.

Testing Fairness Using OpenAI Gym

OpenAI’s Gym is a toolkit for developing and comparing reinforcement learning algorithms and is compatible with any numerical computation library, such as TensorFlow or Theano.

The gym library is a collection of test problems — environments — that one can use to work out reinforcement learning algorithms. Google researchers have used this platform to build their own fairness gym.
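
To make that interface concrete, below is a minimal sketch of the standard Gym interaction loop in Python, written against the classic pre-gymnasium API and using the stock CartPole-v1 environment as a stand-in; the fairness gym's own environments follow the same reset/step pattern, only the observations, actions and metrics differ.

    import gym

    env = gym.make("CartPole-v1")   # any Gym-compatible environment works here
    obs = env.reset()

    done = False
    total_reward = 0.0
    while not done:
        action = env.action_space.sample()          # a random agent, purely for illustration
        obs, reward, done, info = env.step(action)  # the environment advances one step
        total_reward += reward

    print("episode return:", total_reward)
    env.close()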

To explain how bias creeps into models, the researchers use the example of lending decisions based on credit scores in their blog post. According to their analysis, the strategies and metrics used to classify whether an individual qualifies for a loan can at times be unfair.

In their paper titled Fairness is not static, they discuss in detail how the simulation experiments were carried out. They divided the agents in the environment into three types:

  • A static agent that implements a naïve, one-shot classification strategy.
  • A robust agent that implements a similar one-shot policy, but uses a robust classification algorithm.
  • A continuous agent that gathers an initial set of unmanipulated applicants, then continuously retrains a non-robust classifier on the subsequent manipulated scores and labels that it observes.

The researchers consider the continuous agent a reasonable model of deployed machine learning systems.
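
As a rough, hypothetical illustration of that retraining behaviour, the sketch below refits an off-the-shelf scikit-learn classifier on every new batch of observed (and manipulated) data; the data generation, function names and numbers are invented for this example and are not taken from the fairness gym's code.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)

    def sample_applicants(n, manipulated=False):
        # Toy applicant scores and repayment labels; "manipulated" shifts the
        # observed score upwards, mimicking applicants gaming the feature.
        scores = rng.normal(size=(n, 1))
        labels = (scores[:, 0] + rng.normal(scale=0.5, size=n) > 0).astype(int)
        if manipulated:
            scores = scores + 0.5
        return scores, labels

    # Initial, unmanipulated training set.
    X, y = sample_applicants(200)
    clf = LogisticRegression().fit(X, y)

    # Continuous retraining on the manipulated data observed after deployment.
    for step in range(10):
        X_new, y_new = sample_applicants(50, manipulated=True)
        decisions = clf.predict(X_new)        # one-shot classification at this step
        X = np.vstack([X, X_new])
        y = np.concatenate([y, y_new])
        clf = LogisticRegression().fit(X, y)  # non-robust retrain on everything seen so far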

Using the gym, the Google team found that in the lending experiment, the equal opportunity agent (EO agent) over-lends to the disadvantaged group (which initially has a lower average credit score), sometimes applying a lower threshold to that group than the max reward agent would.

This causes the credit scores of one group to decrease more than those of the other, resulting in a wider credit score gap between the groups than in the simulations with the max reward agent.

Depending on whether the indicator of welfare is the credit score or the total number of loans received, the EO agent could be argued to be either better or more detrimental to the disadvantaged group than the max reward agent.
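
The threshold behaviour described above can be illustrated with a small, self-contained example. This is not the fairness gym's agent code; it simply assigns each group the highest score cutoff that still reaches a common target true positive rate, so the group whose scores are lower ends up with the lower threshold, whereas a max reward policy would apply one shared cutoff to everyone. All data and names here are hypothetical.

    import numpy as np

    def tpr_at_threshold(scores, labels, threshold):
        # True positive rate among qualified applicants at a given score cutoff.
        qualified = labels == 1
        return float((scores[qualified] >= threshold).mean())

    def equal_opportunity_thresholds(groups, target_tpr, candidates):
        # For each group, pick the highest cutoff that still meets the target TPR.
        chosen = {}
        for name, (scores, labels) in groups.items():
            feasible = [t for t in candidates
                        if tpr_at_threshold(scores, labels, t) >= target_tpr]
            chosen[name] = max(feasible) if feasible else min(candidates)
        return chosen

    rng = np.random.default_rng(1)
    groups = {
        # Hypothetical data: the disadvantaged group starts with lower scores.
        "disadvantaged": (rng.normal(600, 50, 1000), rng.integers(0, 2, 1000)),
        "advantaged":    (rng.normal(650, 50, 1000), rng.integers(0, 2, 1000)),
    }
    print(equal_opportunity_thresholds(groups, target_tpr=0.8,
                                       candidates=range(500, 760, 10)))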

They also found that equal opportunity constraints, which enforce an equalised true positive rate (TPR) between groups at each step, do not equalise TPR in aggregate over the simulation. […]
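
A toy calculation with invented numbers (not figures from the paper) shows why per-step equality need not carry over to the aggregate: if the groups contribute different numbers of qualified applicants at each step, their aggregate TPRs can diverge even when the per-step TPRs match exactly, a Simpson's-paradox-style effect.

    # Each tuple: (qualified applicants this step, of which correctly approved).
    group_a = [(10, 9), (100, 50)]    # per-step TPR: 0.9, then 0.5
    group_b = [(100, 90), (10, 5)]    # per-step TPR: 0.9, then 0.5 (equal at every step)

    def aggregate_tpr(steps):
        qualified = sum(n for n, _ in steps)
        approved = sum(k for _, k in steps)
        return approved / qualified

    print(round(aggregate_tpr(group_a), 3))  # ~0.536
    print(round(aggregate_tpr(group_b), 3))  # ~0.864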

 

Read more – analyticsindiamag.com