When Timnit Gebru was a student at Stanford University’s prestigious Artificial Intelligence Lab, she ran a project that used Google Street View images of cars to determine the demographic makeup of towns and cities across the U.S.
copyright by www.bloomberg.com
While the AI algorithms did a credible job of predicting income levels and political leanings in a given area, Gebru says her work was susceptible to racial, gender and socio-economic bias. She was also horrified by a ProPublica report that found a computer program widely used to predict whether a criminal will re-offend discriminated against people of color. So earlier this year, Gebru, 34, joined a Microsoft Corp. team called FATE, for Fairness, Accountability, Transparency and Ethics in AI. The program was set up three years ago to ferret out biases that creep into AI data and can skew results.
How to go bias-free?
“I started to realize that I have to start thinking about things like bias,” says Gebru, who co-founded Black in AI, a group set up to encourage people of color to join the artificial intelligence field. “Even my own PhD work suffers from whatever issues you’d have with dataset bias.” In the popular imagination, the threat from AI tends toward the alarmist: self-aware computers turning on their creators and taking over the planet. The reality (at least for now) turns out to be a lot more insidious but no less concerning to the people working in AI labs around the world. Companies, government agencies and hospitals are increasingly turning to machine learning, image recognition and other AI tools to help predict everything from the creditworthiness of a loan applicant to the preferred treatment for a person suffering from cancer. The tools have big blind spots that particularly affect women and minorities.
“The worry is if we don’t get this right, we could be making wrong decisions that have critical consequences to someone’s life, health or financial stability,” says Jeannette Wing, director of Columbia University’s Data Science Institute.
Researchers at Microsoft, International Business Machines Corp. and the University of Toronto identified the need for fairness in AI systems back in 2011. Now in the wake of several high-profile incidents—including an AI beauty contest that chose predominantly white faces as winners—some of the best minds in the business are working on the bias problem. The issue will be a key topic at the Conference on Neural Information Processing Systems, an annual confab that starts today in Long Beach, California, and brings together AI scientists from around the world. […]
read more – copyright by www.bloomberg.com