For the past few years, the world’s biggest tech companies have been on a mission to put artificial-intelligence tools in the hands of every coder. The benefits are clear: Coders familiar with free AI frameworks from Google, Amazon, Microsoft, or Facebook might be more inclined to someday work for one of those talent-starved companies. Even if they don’t, selling pre-built AI tools to other companies has become big business for Google, Amazon, and Microsoft.
Today these same companies are under fire from their employees over who this technology is being sold to, namely branches of the US government like the Department of Defense and Immigration and Customs Enforcement. Workers from Google, Microsoft, and now Amazon have signed petitions and quit in protest of the government work. It’s had some impact: Google released AI ethics principles and publicly affirmed it would not renew its contract with the Department of Defense in 2019. Microsoft told employees in an email that it wasn’t providing AI services to ICE, though that contradicts earlier descriptions of the contract on the company’s website, according to Gizmodo.
This debate, playing out in a very public manner, marks a major shift in how tech companies and their employees talk about artificial intelligence. Until now, everyone has been preaching the Gospel of Good AI: Microsoft CEO Satya Nadella has called it the most transformational technology of a generation, and Google CEO Sundar Pichai has gone even further, saying AI will have an impact comparable to that of fire and electricity.
That impact goes well beyond ad targeting and tagging photos on Facebook, whether researchers like it or not, says Gregory C. Allen, an adjunct fellow at the Center for a New American Security, who co-authored a report called “Artificial Intelligence and National Security” last year. While we’re collectively OK with AI being used to spot celebrities at the royal wedding, some of us have second thoughts when it comes to picking out protesters in a crowd.
“There’s no opt-out choice, where I, as an AI researcher, say that there will never be national security implications of AI. Thomas Edison was not in a position to say that there will never be national security implications of electricity,” Allen told Quartz. “That’s an inherent property of useful technologies, that they tend to be useful for military applications.”
AI like facial recognition is what’s called a dual-use technology, meaning it can have a positive or negative impact depending on how it’s used. Researchers from OpenAI and the Future of Humanity Institute, as well as Allen at CNAS, grappled with this distinction in a February 2018 report laying out the potential dangers of AI. […]
read more – copyright by qz.com