
Investigating Bias In AI Language Learning

Bias in a global society

copyright by www.i-programmer.info

A new study has revealed that AI systems, such as Google Translate, acquire the same cultural biases as humans. While this isn’t a surprising finding, it is a cause for concern and a spur to remedial action.

Arvind Narayanan, an assistant professor of computer science at Princeton and an affiliated faculty member of the Center for Information Technology Policy (CITP), explained the rationale for this research: “Questions about fairness and bias in machine learning are tremendously important for our society. We have a situation where these artificial intelligence systems may be perpetuating historical patterns of bias that we might find socially unacceptable and which we might be trying to move away from.” The tool used for research into human biases is the Implicit Association Test (IAT), which measures the response times (in milliseconds) of subjects asked to pair word concepts displayed on a computer screen. Response times are far shorter for concepts perceived as similar than for those regarded as dissimilar.

Measuring bias in algorithms

The Princeton team developed a similar way to measure biases in systems that acquire language from human texts. Rather than measuring lag time, however, their Word-Embedding Association Test uses associations between words, analyzing roughly 2.2 million words in total. In particular, they relied on GloVe (Global Vectors for Word Representation), an open-source program developed by Stanford researchers that measures the linguistic or semantic similarity of words based on co-occurrence and proximity. They used this approach to look at words like “programmer, engineer, scientist” and “nurse, teacher, librarian” alongside two sets of attribute words, such as “man, male” and “woman, female,” looking for evidence of the kinds of biases humans can unwittingly possess.
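The idea behind such a word-embedding association test can be sketched in a few lines: for each target word, compare its average cosine similarity to one attribute set (e.g. male terms) against the other (e.g. female terms), then sum the differences across the two target groups. The snippet below is a minimal illustration of that measurement, not the study's actual code; the tiny 2-D vectors are made up for demonstration, whereas the researchers used real GloVe embeddings trained on web text.

```python
import numpy as np

def cosine(u, v):
    """Cosine similarity between two word vectors."""
    return np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

def association(w, A, B, vec):
    """How much closer word w sits to attribute set A than to set B."""
    return (np.mean([cosine(vec[w], vec[a]) for a in A])
            - np.mean([cosine(vec[w], vec[b]) for b in B]))

def weat_statistic(X, Y, A, B, vec):
    """Total association of target set X with (A vs B), minus that of set Y.

    A positive value means X leans toward A and Y toward B."""
    return (sum(association(x, A, B, vec) for x in X)
            - sum(association(y, A, B, vec) for y in Y))

# Toy 2-D vectors standing in for real GloVe embeddings (illustrative only).
vec = {
    "programmer": np.array([0.9, 0.1]), "nurse": np.array([0.1, 0.9]),
    "man":        np.array([1.0, 0.0]), "woman": np.array([0.0, 1.0]),
}

score = weat_statistic(["programmer"], ["nurse"], ["man"], ["woman"], vec)
print(score > 0)  # positive: "programmer" leans male, "nurse" leans female
```

With embeddings trained on real text, a clearly positive score on career words versus family words is exactly the kind of human-like bias the study detected.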

Machines learn from humans

Aylin Caliskan explains that female names are associated with family terms whereas male names are associated with career terms, and demonstrates how Google Translate perpetuates gender stereotypes using Turkish, a language whose single gender-neutral third-person pronoun covers he, she and it. In an interview with the Guardian newspaper, co-author Joanna Bryson says: “A lot of people are saying this is showing that AI is prejudiced. No. This is showing we’re prejudiced and that AI is learning it.” […]

