Artificial intelligence does not automatically rise above human biases regarding gender and race. On the contrary, machine learning algorithms that represent the cutting edge of AI in many online services and apps may readily mimic the biases encoded in their training datasets.
copyright by spectrum.ieee.org
A new study has shown that an AI that learns from existing English-language texts will exhibit the same human biases found in those texts. The results have huge implications given machine learning AI’s popularity among Silicon Valley tech giants and many companies worldwide. Psychologists have previously shown how unconscious biases can emerge during word-association experiments known as implicit association tests. In the new study, computer scientists replicated many of those biases while training an off-the-shelf machine learning system on a “Common Crawl” corpus of text collected from the Internet, containing some 2.2 million distinct words.
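The study’s actual training pipeline is not reproduced here, but as a rough stand-in, publicly available pre-trained GloVe word vectors can be loaded with the gensim library to explore the same kinds of associations. Note the assumptions: “glove-wiki-gigaword-50” is a small demonstration model, not the much larger Common Crawl vectors used in the paper.

```python
# Rough stand-in for the study's setup: load small, publicly available
# pre-trained GloVe vectors via gensim's downloader. The paper used GloVe
# vectors trained on the much larger Common Crawl corpus; the model below
# is chosen only because it is small and quick to download.
import gensim.downloader as api

vectors = api.load("glove-wiki-gigaword-50")  # downloads on first use

# Words the model judges more similar sit closer together in vector space.
print(vectors.most_similar("flower", topn=5))
print(vectors.similarity("flower", "pleasant"))
print(vectors.similarity("insect", "pleasant"))
```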
A mirror of society
In some more neutral examples, the AI system was more likely to associate words such as “flower” and “music” than words such as “insects” and “weapons” with “pleasant” stimuli. But the AI was also more likely to associate European-American names than African-American names with “pleasant” stimuli. It also tended to associate “woman” and “girl” with the arts rather than with mathematics. “In all cases where machine learning aids in perceptual tasks, the worry is that if machine learning is replicating human biases, it’s also reflecting that back at us,” says Arvind Narayanan, a computer scientist at the Center for Information Technology Policy at Princeton University. “Perhaps it would further create a feedback loop in perpetuating those biases.”
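As a simplified illustration of how such associations can be quantified (this is not the study’s code, and the vectors below are fabricated for the example), one can compare a target word’s cosine similarity to a “pleasant” vector against its similarity to an “unpleasant” one:

```python
# Illustrative sketch only: tiny hand-made vectors stand in for real word
# embeddings, which typically have hundreds of dimensions.
import numpy as np

def cosine(a, b):
    """Cosine similarity between two vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Fabricated embeddings, chosen so the example mirrors the reported pattern.
emb = {
    "flower":     np.array([0.9, 0.1, 0.2]),
    "insect":     np.array([0.1, 0.8, 0.3]),
    "pleasant":   np.array([0.8, 0.2, 0.1]),
    "unpleasant": np.array([0.1, 0.9, 0.3]),
}

for word in ("flower", "insect"):
    print(word,
          "pleasant:", round(cosine(emb[word], emb["pleasant"]), 2),
          "unpleasant:", round(cosine(emb[word], emb["unpleasant"]), 2))
```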
An algorithm is only as objective as the data
To reveal the biases that can arise in natural-language learning, Narayanan and his colleagues created new statistical tests based on the Implicit Association Test (IAT) used by psychologists to reveal human biases. Their work, detailed in the 14 April 2017 issue of the journal Science, is the first to show such human biases in “word embedding,” a statistical modeling technique commonly used in machine learning and natural language processing. Word embedding maps individual words to points in a high-dimensional space, so that semantic relationships among words are represented as geometric relationships among the corresponding points. The idea of AI picking up the biases in the texts it was trained on may not sound like an earth-shattering revelation. But the study helps put the nail in the coffin of the old argument that AI is automatically more objective than humans, says Suresh Venkatasubramanian, a computer scientist at the University of Utah who was not involved in the study.
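A minimal sketch of what such a test can look like, loosely following the Word Embedding Association Test (WEAT) idea from the paper: compute each target word’s mean similarity to one attribute set minus its mean similarity to the other, then compare the totals for two target sets. The vectors below are random placeholders so the example runs on its own; the printed score therefore carries no real meaning, whereas with real embeddings a consistently positive score would indicate the first target set is more strongly associated with the first attribute set.

```python
# Loose sketch of a WEAT-style differential association score. `emb` can be
# any mapping from words to vectors; random placeholder vectors are used here
# only so the example is self-contained.
import numpy as np

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def association(w, A, B, emb):
    """Mean similarity of word w to attribute set A minus attribute set B."""
    return (np.mean([cosine(emb[w], emb[a]) for a in A])
            - np.mean([cosine(emb[w], emb[b]) for b in B]))

def weat_score(X, Y, A, B, emb):
    """Differential association of target sets X and Y with attributes A and B."""
    return (sum(association(x, A, B, emb) for x in X)
            - sum(association(y, A, B, emb) for y in Y))

rng = np.random.default_rng(0)
words = ["flower", "rose", "insect", "spider",
         "pleasant", "love", "unpleasant", "hate"]
emb = {w: rng.normal(size=50) for w in words}  # placeholder 50-d vectors

score = weat_score(X=["flower", "rose"], Y=["insect", "spider"],
                   A=["pleasant", "love"], B=["unpleasant", "hate"], emb=emb)
print(f"WEAT-style score: {score:.3f}")
```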
Implications of biased data and models
To understand the possible implications, one need only look at the Pulitzer Prize-finalist “Machine Bias” series by ProPublica, which showed how a computer program designed to predict future criminals is biased against black people. Given such stakes, some researchers are considering how to deploy machine learning in a way that recognizes and mitigates the harmful effects of human biases. “Trained models are only as good as the training process and the data they are trained on,” Venkatasubramanian says. “They don’t magically acquire objectivity by virtue of existing.”
Read more – copyright by spectrum.ieee.org