According to Hollywood, we are all just one short step away from living in a world run by computers. Artificial intelligence (AI) is advancing rapidly into territory that was previously in the realm of science fiction. Some see this future as a welcome utopia while others live in fear of life with Terminator- or Matrix-style outcomes. No matter which outcome you believe is most likely, the truth probably lies somewhere in between.
You see, the biggest problem with computers is also their greatest advantage – they are programmed by humans. Many companies hope to use AI to make hiring more diverse and free from human bias. A recent Deloitte survey found that more than 30 percent of respondents were already using some form of AI in their recruitment and hiring processes. While this sounds like a boon for gender and racial diversity, there are still some problems.
Discrimination in the workplace
Removing bias in the workplace is a complicated issue. In part, this is because a person’s perception of discrimination is largely determined by their own subjective experiences.
According to studies conducted by the Pew Research Center, an individual’s perception and experience of discrimination can be influenced by their gender, race, and age. These studies demonstrate that even when people broadly agree that discrimination is occurring, the ways in which it is defined can be dramatically different.
Further complicating the issue for humans is the environmental context of the potential discrimination. A recent study showed that women working in male-dominated workplaces reported higher rates of gender-based discrimination.
For humans, defining and addressing discrimination involves so many variables that even with the best of intentions, it can be difficult to get right.
When the tech companies writing the code that governs hiring-decision AI are made up primarily of white men, the issue becomes even more complicated to navigate successfully. Even with the best of intentions, the subjective experiences of these men may make it difficult for them to create products that are free from their own unconscious biases.
Joy Buolamwini brought widespread attention to unintended bias and how rapidly it can be perpetuated through software and AI. In her TED talk she explained, “Algorithmic bias, like human bias, results in unfairness. However, algorithms, like viruses, can spread bias on a massive scale at a rapid pace. Algorithmic bias can also lead to exclusionary experiences and discriminatory practices.”
In her work as a graduate student at MIT, she discovered that facial recognition software was unable to recognize dark-skinned faces as effectively as those of lighter-skinned individuals.
She expanded her testing to include AI-powered systems from larger companies, including IBM and Microsoft, and found that while those systems were adept at identifying white male faces, they performed poorly at recognizing dark-skinned faces. The results were worst for dark-skinned women, showing two biases at play. […]
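The kind of gap Buolamwini surfaced is typically found by evaluating a system's accuracy separately for each demographic subgroup rather than reporting a single overall number. As a minimal sketch of that idea, the Python snippet below computes per-group accuracy from labeled predictions; the group names and numbers are purely illustrative, not drawn from any real benchmark or from her study.

```python
# Minimal sketch of a disaggregated evaluation: computing a classifier's
# accuracy separately per demographic subgroup. All data here is invented
# for illustration only.
from collections import defaultdict

def accuracy_by_group(records):
    """records: iterable of (group, predicted_label, true_label) tuples.
    Returns a dict mapping each group to its accuracy."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for group, predicted, actual in records:
        total[group] += 1
        if predicted == actual:
            correct[group] += 1
    return {g: correct[g] / total[g] for g in total}

# Hypothetical results: a single overall accuracy would hide the gap.
results = (
    [("group A", "match", "match")] * 95
    + [("group A", "no-match", "match")] * 5
    + [("group B", "match", "match")] * 65
    + [("group B", "no-match", "match")] * 35
)

print(accuracy_by_group(results))
# {'group A': 0.95, 'group B': 0.65}
```

Even though the overall accuracy here is 80 percent, the breakdown shows the system fails one group far more often than the other – which is exactly the pattern Buolamwini's audits exposed.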