Bias in a global society
copyright by www.i-programmer.info
A new study has revealed that AI systems, such as Google Translate, acquire the same cultural biases as humans. While this isn’t a surprising finding, it is a cause for concern and a call for remedial action.
Arvind Narayanan, an assistant professor of computer science at Princeton and an affiliated faculty member of the Center for Information Technology Policy (CITP), explained the rationale for this research: “Questions about fairness and bias in machine learning are tremendously important for our society. We have a situation where these artificial intelligence systems may be perpetuating historical patterns of bias that we might find socially unacceptable and which we might be trying to move away from.” The tool used for research into human biases is the Implicit Association Test, which measures the response times (in milliseconds) of subjects asked to pair word concepts displayed on a computer screen. Response times are far shorter for concepts perceived as similar than for those regarded as dissimilar.
Measuring bias in algorithms
The Princeton team developed a similar way to measure biases in AI systems that acquire language from human texts. Rather than measuring lag time, however, their Word-Embedding Association Test uses associations between words, analyzing roughly 2.2 million words in total. In particular, they relied on GloVe (Global Vectors for Word Representation), an open source program developed by Stanford researchers that measures the linguistic or semantic similarity of words based on their co-occurrence and proximity in text. They used this approach to look at words like “programmer, engineer, scientist” and “nurse, teacher, librarian” alongside two sets of attribute words, such as “man, male” and “woman, female,” looking for evidence of the kinds of biases humans can unwittingly possess.
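To make the idea concrete, here is a minimal sketch in Python of the kind of association measure the Word-Embedding Association Test relies on. It assumes word vectors have already been loaded from pretrained GloVe embeddings; the function names and word lists are illustrative, not the researchers’ actual code.

```python
import numpy as np

def cosine(a, b):
    # Cosine similarity: how strongly two word vectors point the same way
    return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

def weat_effect_size(X, Y, A, B):
    # X, Y: target word vectors (e.g. career-type words vs. caring-type words)
    # A, B: attribute word vectors (e.g. male terms vs. female terms)
    # s(w) is the mean association of word w with A minus its association with B
    def s(w):
        return np.mean([cosine(w, a) for a in A]) - np.mean([cosine(w, b) for b in B])
    sx = [s(x) for x in X]
    sy = [s(y) for y in Y]
    # Effect size: difference in mean associations between the two target sets,
    # normalized by the standard deviation over all target words
    return (np.mean(sx) - np.mean(sy)) / np.std(sx + sy)

# Hypothetical usage, with `glove` a dict mapping words to their vectors:
# d = weat_effect_size(
#     [glove[w] for w in ("programmer", "engineer", "scientist")],
#     [glove[w] for w in ("nurse", "teacher", "librarian")],
#     [glove[w] for w in ("man", "male")],
#     [glove[w] for w in ("woman", "female")],
# )
```

A positive effect size means the first set of target words leans toward the male attribute terms; it plays the role in the machine test that millisecond response times play in the human Implicit Association Test.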
Machines learn from humans
Aylin Caliskan explains that female names are associated with family terms, whereas male names are associated with career terms, and demonstrates how AI perpetuates gender stereotypes using Google Translate and Turkish, a language whose single third-person pronoun “o” covers he, she and it: when translating into English, Google Translate picks a gendered pronoun based on the stereotype attached to the occupation, rendering “o bir doktor” as “he is a doctor” but “o bir hemşire” as “she is a nurse.” In an interview with the Guardian newspaper, co-author Joanna Bryson says: “A lot of people are saying this is showing that AI is prejudiced. No. This is showing we’re prejudiced and that AI is learning it.” […]
read more – copyright by www.i-programmer.info