In less than five years, a 2012 academic breakthrough in artificial intelligence evolved into the technology responsible for making healthcare decisions, deciding whether prisoners should go free, and determining what we see on the internet.
Machine learning is beginning to invisibly touch nearly every aspect of our lives; its ability to automate decision making challenges the future roles of experts and unskilled laborers alike. Hospitals might need fewer doctors, thanks to automated treatment planning, and truck drivers might no longer be needed by 2030.
But it’s not just about jobs. Serious questions are starting to be raised about whether the decisions made by AI can be trusted. Research suggests that these algorithms are easily biased by the data from which they learn, meaning societal biases are reinforced and magnified in the code. That could mean certain job applicants get excluded from consideration when hiring software is used to scan resumes. Moreover, the decision-making process of these algorithms is so complex that researchers can’t definitively say why one decision was made over another. And while that may be disconcerting to laymen, there’s an industry debate over how valuable knowing those internal mechanisms really is, meaning research may very well forge ahead with the understanding that we simply don’t need to understand AI.
Governments’ need to understand AI
Until this year, these questions typically came from academics and researchers skeptical of the breakneck pace at which Silicon Valley was implementing AI. But 2017 brought new organizations spanning big tech companies, academics, and governments dedicated to understanding the societal impacts of AI.
“The reason is simple—AI has moved from research to reality, from the realm of science fiction to the reality of everyday use,” Oren Etzioni, executive director of the Allen Institute for AI, tells Quartz. The Allen Institute, founded in 2012, predates much of the contemporary conversation on AI and society, having published research on ethical and legal considerations of AI design.
Here’s a quick chronological list of 2017’s entrants to the conversation:
- Ethics and Governance of Artificial Intelligence Fund, founded January 2017 with investment from Reid Hoffman, Omidyar Network, and the Knight Foundation to research ethical AI design, communication, and public policy.
- Partnership on AI, founded September 2016 but announced first initiatives May 2017. Industry-led organization to form best practices for ethical and safe AI creation, with founding members Amazon, Apple, Facebook, Google, IBM, and Microsoft. More than 50 other companies have joined since founding.
- People + AI Research (PAIR), founded July 2017 by Google to study how machines and humans interact.
- DeepMind Ethics & Society, founded October 2017 to study AI ethics, safety, and accountability.
- AI Now Institute, founded November 2017 by Microsoft’s Kate Crawford and Google’s Meredith Whittaker. Meant to generate core research on societal impacts of AI.
- Proposed US Department of Commerce committee on AI, a bill drafted December 2017 by Senator Maria Cantwell that suggests wide-reaching recommendations on how to regulate AI.
These organizations aren’t just computer scientists and tech executives. The Partnership on AI board touts the executive director of ACLU Massachusetts and a former Obama economic adviser. A California Supreme Court justice sits on the AI Now Institute’s advisory board, alongside the president of the NAACP Legal Defense Fund. […]