In less than five years, a 2012 academic breakthrough in artificial intelligence evolved into the technology responsible for making healthcare decisions, deciding whether prisoners should go free, and determining what we see on the internet.
Machine learning is beginning to invisibly touch nearly every aspect of our lives; its ability to automate decision-making challenges the future roles of experts and unskilled laborers alike. Hospitals might need fewer doctors thanks to automated treatment planning, and truck drivers may no longer be needed by 2030.
But it’s not just about jobs. Serious questions are being raised about whether decisions made by AI can be trusted. Research suggests that these algorithms are easily biased by the data they learn from, meaning societal biases are reinforced and magnified in the code. That could mean certain job applicants are excluded from consideration when AI hiring software scans resumes. What’s more, the decision-making process of these algorithms is so complex that AI researchers can’t definitively say why one decision was made over another. And while that may be disconcerting to laymen, there’s an industry debate over how valuable knowing those internal mechanisms really is, meaning research may well forge ahead on the understanding that we simply don’t need to understand AI.
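The bias mechanism is simple enough to demonstrate in a few lines. Below is a minimal, hypothetical sketch in Python using scikit-learn — the data and the hiring scenario are invented for illustration and do not come from the Quartz article. A classifier fit on historical hiring decisions that penalized one group will assign a lower hiring probability to an otherwise identical candidate from that group.

```python
# Minimal, hypothetical sketch: a model trained on biased historical
# hiring data reproduces that bias. All data here is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000
skill = rng.normal(size=n)          # candidate skill score
group = rng.integers(0, 2, size=n)  # group membership: 0 or 1
X = np.column_stack([skill, group])

# Historical labels encode a bias: past decisions penalized group 1
# regardless of skill.
hired = (skill - 0.8 * group + rng.normal(scale=0.5, size=n)) > 0

model = LogisticRegression().fit(X, hired)

# Two candidates with identical skill who differ only in group:
print(model.predict_proba([[1.0, 0], [1.0, 1]])[:, 1])
# The group-1 candidate receives a lower predicted hiring probability;
# the model has faithfully learned the bias baked into its training data.
```

Nothing in the code is malicious; the model simply optimizes fit to the labels it was given, which is exactly why biased training data yields biased decisions.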
Governments’ need to understand AI
Until this year, these questions typically came from academics and researchers skeptical of the breakneck pace at which Silicon Valley was implementing AI. But 2017 brought new organizations, spanning big tech companies, academia, and governments, dedicated to understanding the societal impacts of artificial intelligence.
“The reason is simple—AI has moved from research to reality, from the realm of science fiction to the reality of everyday use,” Oren Etzioni, executive director of the Allen Institute for AI, tells Quartz. The Allen Institute, founded in 2012, predates much of the contemporary conversation on AI and society, having published research on ethical and legal considerations of AI design.
Here’s a quick chronological list of 2017’s entrants to the conversation:
- Ethics and Governance of Artificial Intelligence Fund, founded in January 2017 with investment from Reid Hoffman, the Omidyar Network, and the Knight Foundation to research ethical AI design, communication, and public policy.
- Partnership on AI, founded in September 2016 but announced its first initiatives in May 2017. An industry-led organization formed to develop best practices for ethical and safe AI, with founding members Amazon, Apple, Facebook, Google, IBM, and Microsoft. More than 50 other companies have joined since.
- People and AI Research, founded in July 2017 by Google to study how humans and machines interact.
- DeepMind Ethics & Society, founded in October 2017 to study AI ethics, safety, and accountability.
- AI Now Institute, founded in November 2017 by Microsoft’s Kate Crawford and Google’s Meredith Whittaker to generate foundational research on the societal impacts of AI.
- Proposed US Department of Commerce committee on AI, a bill drafted in December 2017 by Senator Maria Cantwell that calls for wide-reaching recommendations on how to regulate artificial intelligence.
These organizations aren’t made up of just computer scientists and tech executives. The Partnership on AI’s board includes the executive director of the ACLU of Massachusetts and a former Obama economic adviser. A California Supreme Court justice sits on the AI Now Institute’s advisory board, alongside the president of the NAACP Legal Defense Fund. […]
Read more – copyright by qz.com