
AI Learns Gender and Racial Biases from Language

Artificial intelligence does not automatically rise above human biases regarding gender and race. On the contrary, the machine-learning algorithms that represent the cutting edge of AI in many online services and apps may readily mimic the biases encoded in their training datasets.

A new study has shown how machine-learning algorithms trained on existing English-language texts will exhibit the same human biases found in those texts. The results have huge implications given machine learning’s popularity among Silicon Valley tech giants and many companies worldwide. Psychologists previously showed how unconscious biases can emerge during word-association experiments known as implicit association tests. In the new study, computer scientists replicated many of those biases while training an off-the-shelf machine-learning model on a “Common Crawl” body of text (2.2 million different words) collected from the Internet.

A mirror of society

In some more neutral examples, the system was more likely to associate words such as “flower” and “music” with “pleasant” rather than with less pleasant words such as “insects” and “weapons.” But the AI was also more likely to associate European American names rather than African American names with “pleasant” stimuli. It also tended to associate “woman” and “girl” with the arts rather than with mathematics. “In all cases where machine learning aids in perceptual tasks, the worry is that if machine learning is replicating human biases, it’s also reflecting that back at us,” says Arvind Narayanan, a computer scientist at Princeton University’s Center for Information Technology Policy. “Perhaps it would further create a feedback loop in perpetuating those biases.”
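These “associations” come down to distances between word vectors. As a rough illustration of the kind of measurement involved, the sketch below uses gensim with a small pretrained GloVe model (trained on Wikipedia/Gigaword, a stand-in for the study’s much larger Common Crawl embeddings); the model name and word choices here are illustrative, not the study’s actual setup:

```python
import gensim.downloader as api

# Load a small pretrained GloVe embedding (downloads on first use).
# The study itself used embeddings trained on the Common Crawl corpus.
model = api.load("glove-wiki-gigaword-50")

# Cosine similarity: higher means the two words occur in more similar contexts.
print(model.similarity("flower", "pleasant"))    # expected: relatively high
print(model.similarity("flower", "unpleasant"))  # expected: lower
print(model.similarity("insect", "pleasant"))    # expected: lower
```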

An algorithm is only as objective as its data

To reveal the biases that can arise in natural language processing, Narayanan and his colleagues created new statistical tests based on the Implicit Association Test (IAT) used by psychologists to reveal human biases. Their work, detailed in the 14 April 2017 issue of the journal Science, is the first to show such human biases in “word embedding,” a statistical modeling technique commonly used in machine learning and natural language processing. Word embedding involves mapping individual words to different points in space and analyzing the semantic relationships among those points by representing them as geometric relationships. The idea of AI picking up the biases within the language texts it trained on may not sound like an earth-shattering revelation. But the study helps put the nail in the coffin of the old argument that algorithms are automatically more objective than humans, says Suresh Venkatasubramanian, a computer scientist at the University of Utah who was not involved in the study.
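Concretely, the authors’ Word-Embedding Association Test (WEAT) scores how two sets of target words (say, flowers vs. insects) differ in their cosine similarity to two sets of attribute words (pleasant vs. unpleasant). A minimal sketch of the effect-size calculation follows; the word lists are abbreviated placeholders, and the random vectors merely let the code run end to end. Swap in real pretrained embeddings to measure anything meaningful:

```python
import numpy as np

def cosine(u, v):
    """Cosine similarity between two vectors."""
    return np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

def association(w, A, B, vectors):
    """s(w, A, B): mean similarity of w to attribute set A minus set B."""
    return (np.mean([cosine(vectors[w], vectors[a]) for a in A])
            - np.mean([cosine(vectors[w], vectors[b]) for b in B]))

def weat_effect_size(X, Y, A, B, vectors):
    """Cohen's-d-style effect size over target sets X, Y and attributes A, B."""
    s = {w: association(w, A, B, vectors) for w in X + Y}
    x_mean = np.mean([s[w] for w in X])
    y_mean = np.mean([s[w] for w in Y])
    pooled_std = np.std([s[w] for w in X + Y], ddof=1)
    return (x_mean - y_mean) / pooled_std

# Placeholder word lists; the paper uses longer, published lists.
X = ["flower", "rose"]      # targets expected to lean "pleasant"
Y = ["insect", "wasp"]      # targets expected to lean "unpleasant"
A = ["pleasant", "love"]    # positive attributes
B = ["unpleasant", "hate"]  # negative attributes

# Random toy vectors so the sketch is self-contained and runnable.
rng = np.random.default_rng(0)
vectors = {w: rng.normal(size=50) for w in X + Y + A + B}
print(weat_effect_size(X, Y, A, B, vectors))
```

With real embeddings, a large positive effect size indicates that the first target set (flowers) sits measurably closer to the positive attributes than the second (insects) does, which is exactly the pattern the study reports.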

Implications of Biased Data and Models

To understand the possible implications, one need only look at the Pulitzer Prize finalist “Machine Bias” series by ProPublica, which showed how a computer program designed to predict future criminals is biased against black people. Given such stakes, some researchers are considering how to deploy machine learning in a way that recognizes and mitigates the harmful effects of human biases. “Trained models are only as good as the training process and the data they are trained on,” Venkatasubramanian says. “They don’t magically acquire objectivity by virtue of existing.”
