“We don’t see things as they are, we see them as we are.” So wrote Anaïs Nin, rather succinctly describing the unfortunate melange of biases that accompany these otherwise perfectly well-functioning human brains of ours.
Copyright: forbes.com – “How To Use AI To Eliminate Bias”
In a business context, affinity bias, confirmation bias, attribution bias, and the halo effect, some of the better known of these errors of reasoning, really just scratch the surface. In aggregate, they leave a trail of offenses and errors in their wake.
Of course, the most pernicious of our human biases are those that prejudice us for or against our fellow humans on the basis of age, race, gender, religion, or physical appearance. Try as we might to purge ourselves, our work environments, and our society of these distortions, they still worm their way into—well, just about everything that we think and do—even modern technologies, like AI.
Critics say that AI makes bias worse
Since AI was first deployed in hiring, loan approvals, insurance premium modeling, facial recognition, law enforcement, and a constellation of other applications, critics have—with considerable justification—pointed out the technology’s propensity for bias.
Google’s Bidirectional Encoder Representations from Transformers (BERT), for example, is a leading Natural Language Processing (NLP) model that developers can use to build their own AI. BERT was originally trained using Wikipedia text as its principal source. What’s wrong with that? The overwhelming majority of Wikipedia’s contributors are white males from Europe and North America. As a result, one of the most important sources of language-based AI began its life with a biased perspective baked in.
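The mechanism is straightforward: a language model can only learn the associations its corpus contains. A toy sketch (not BERT itself, just invented sentences and simple co-occurrence counts) shows how a skewed corpus produces skewed statistics for a model to absorb:

```python
from collections import Counter
from itertools import combinations

# Invented toy corpus: occupations paired with gendered pronouns.
# Real training corpora like Wikipedia encode skews the same way, at scale.
corpus = [
    "the engineer fixed his code",
    "the engineer reviewed his design",
    "the nurse checked her charts",
    "the engineer shipped his release",
]

def cooccurrence(sentences):
    """Count how often each pair of words appears in the same sentence."""
    counts = Counter()
    for sentence in sentences:
        words = sorted(set(sentence.split()))
        for a, b in combinations(words, 2):
            counts[(a, b)] += 1
    return counts

counts = cooccurrence(corpus)
# 'engineer' co-occurs with 'his' three times and with 'her' zero times,
# purely as an artifact of what the corpus happens to contain.
print(counts[("engineer", "his")], counts[("engineer", "her")])  # → 3 0
```

An embedding model trained on such text would place "engineer" closer to "his" than to "her" in its vector space, which is exactly the kind of baked-in association the article describes.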
A similar problem was found in Computer Vision, another key area of AI development. Facial recognition datasets comprising hundreds of thousands of annotated faces are critical to the development of facial recognition applications used for cybersecurity, law enforcement, and even customer service. It turned out, however, that the (presumably mostly white, middle-aged male) developers unconsciously did a better job achieving accuracy for people like themselves. Error rates for women, children, the elderly, and people of color were much higher than those for middle-aged white men. As a result, IBM, Amazon, and Microsoft halted sales of their facial recognition technology to law enforcement in 2020, for fear that these biases would result in wrongful identification of suspects.[…]
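Disparities like these are typically surfaced by a per-group error-rate audit: group the test set by demographic attribute and compare error rates across groups. A minimal sketch, using invented data rather than any real benchmark:

```python
from collections import defaultdict

# Hypothetical evaluation records: (group, ground truth, model prediction).
# The data is fabricated for illustration; a real audit would use a
# demographically labeled test set.
records = [
    ("group_a", True, True), ("group_a", True, True),
    ("group_a", False, False), ("group_a", True, True),
    ("group_b", True, False), ("group_b", False, True),
    ("group_b", True, True), ("group_b", True, False),
]

def error_rate_by_group(rows):
    """Return the fraction of mispredictions for each group."""
    totals = defaultdict(int)
    errors = defaultdict(int)
    for group, truth, pred in rows:
        totals[group] += 1
        errors[group] += (truth != pred)
    return {g: errors[g] / totals[g] for g in totals}

rates = error_rate_by_group(records)
print(rates)  # → {'group_a': 0.0, 'group_b': 0.75}
```

A large gap between groups, as in this toy output, is the quantitative signal behind the findings that led the vendors to pull their systems from law-enforcement use.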