Gfycat’s facial recognition software can now recognize individual members of K-pop band Twice, but in early tests it couldn’t distinguish different Asian faces.

Software engineer Henry Gan got a surprise last summer when he tested his team’s new facial recognition system on coworkers at startup Gfycat.
The machine-learning software successfully identified most of his colleagues, but the system stumbled with one group. “It got some of our Asian employees mixed up,” says Gan, who is Asian. “Which was strange because it got everyone else correctly.”
Gan could take solace from the fact that similar problems have tripped up much larger companies. Research released last month found that facial-analysis services offered by Microsoft and IBM were at least 95 percent accurate at recognizing the gender of lighter-skinned women, but erred at least 10 times more frequently when examining photos of dark-skinned women. Both companies say they have improved their systems, but declined to discuss exactly how. In January, WIRED found that Google’s Photos service is unresponsive to searches for the terms gorilla, chimpanzee, or monkey. The censorship is a safety feature to prevent repeats of a 2015 incident in which the service mistook photos of black people for apes.
The danger of bias in AI systems is drawing growing attention from both corporate and academic researchers. Machine learning shows promise for diverse uses such as enhancing consumer products and making companies more efficient. But evidence is accumulating that this supposedly smart software can pick up or reinforce social biases.
That’s becoming a bigger problem as research and software are shared more widely and more enterprises experiment with AI technology. The industry’s understanding of how to test, measure, and prevent bias has not kept up. “Lots of companies are now taking these things seriously, but the playbook for how to fix them is still being written,” says Meredith Whittaker, co-director of AI Now, an institute focused on ethics and artificial intelligence at New York University.
Gfycat dove into facial recognition to help people find the perfect animated GIF response when messaging friends. The company provides a search engine that trawls nearly 50 million looping clips, from kitten fails to presidential facial expressions. By adding facial recognition, executives thought they could improve the quality of searches for public figures like movie or music stars.
As a 17-person startup, Gfycat doesn’t have a giant AI lab inventing new machine-learning tools. The company used open-source facial-recognition software based on research from Microsoft, and trained it with millions of photos from collections released by the Universities of Illinois and Oxford. But as well as showing a kind of Asian blindness around the office, the system proved unable to distinguish Asian celebrities such as Constance Wu and Lucy Liu. It also performed poorly on people with darker skin tones. […]
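The article doesn’t name the exact library Gfycat used, but the general pattern it describes — an off-the-shelf model that maps each face to an embedding vector, then matches faces by distance against a gallery of known people — looks roughly like the sketch below. This is a minimal illustration, assuming the open-source, dlib-based face_recognition package (not necessarily Gfycat’s stack) and hypothetical image files. It also shows where the bias surfaces: if the training data underrepresented a group, distinct people’s embeddings can land within the match threshold and the system conflates them.

```python
# Sketch of embedding-based face recognition, the approach common to
# open-source libraries of the kind described above. Illustrative only:
# the library choice and image filenames are assumptions, not Gfycat's
# actual pipeline.
import face_recognition

# Build a gallery of known celebrities: one 128-dimensional embedding each.
gallery = {}
for name, path in [("Constance Wu", "constance_wu.jpg"),
                   ("Lucy Liu", "lucy_liu.jpg")]:
    image = face_recognition.load_image_file(path)
    gallery[name] = face_recognition.face_encodings(image)[0]

# Identify each face in a query frame by its nearest gallery embedding.
query = face_recognition.load_image_file("gif_frame.jpg")
for encoding in face_recognition.face_encodings(query):
    names = list(gallery)
    distances = face_recognition.face_distance(
        [gallery[n] for n in names], encoding)
    best = min(range(len(names)), key=lambda i: distances[i])
    # 0.6 is the library's default match tolerance. When a group is
    # underrepresented in training data, different people's embeddings
    # can fall within this threshold of each other, producing exactly
    # the mix-ups Gan observed.
    if distances[best] < 0.6:
        print(f"Matched {names[best]} (distance {distances[best]:.2f})")
    else:
        print("No confident match")
```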
Read more at www.wired.com – copyright by www.wired.com