Elon Musk recently warned that artificial intelligence is the “biggest risk we face as a civilization.” While perhaps at some point we’ll need to begin worrying about overly smart machines, I’m a little more Zuckerbergian in my beliefs.
What Musk’s statement highlights, though, is something that’s pretty pervasive throughout society — the fear that autonomous AI is going to make humans obsolete, coupled with the fear that AI operates completely independently of human guidance. Isn’t it scary to think about something with no real moral compass making wide-ranging decisions about how we live our lives? Here’s the thing: AI isn’t an arbiter of ethics, and it won’t be in the foreseeable future. But what AI can do in the ethics space is support and augment our own decision making — and act as an alarm when we’re getting it wrong. Getting it right, though? That’s on us.
Using AI As An Early-Warning System
Take the United Airlines incident from this past April. United’s tone-deaf PR response clearly stemmed from the misguided belief that the incident would blow over and be forgotten. And in a pre-social-media environment, maybe it would have. But United didn’t look at the data, and the company got it wrong. If it had had the time, inclination and wherewithal to survey the situation and assess the potential fallout, it might have responded differently. But humans only have so much bandwidth. This is an example where AI could have been used to monitor and alert on the data, prompting a timely, appropriate response instead of a knee-jerk one.
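To make that concrete, here’s a minimal sketch of the kind of early-warning monitor I’m describing. The keyword list, thresholds and sample data are illustrative assumptions, not anything United actually ran; a real system would plug in a social media API and a trained sentiment model in place of these toy pieces.

    # Illustrative sketch of a brand-crisis early-warning monitor.
    # The keyword list and thresholds below are assumptions for the example;
    # a production system would use a real sentiment model and live data.

    NEGATIVE_WORDS = {"outrage", "boycott", "disgusting", "awful", "shame"}

    def negative_share(mentions):
        """Fraction of mentions containing at least one negative keyword."""
        if not mentions:
            return 0.0
        hits = sum(1 for text in mentions
                   if any(word in text.lower() for word in NEGATIVE_WORDS))
        return hits / len(mentions)

    def should_alert(mentions, baseline_volume,
                     volume_spike=3.0, negativity_threshold=0.4):
        """Wake a human when mention volume spikes AND sentiment turns sour."""
        return (len(mentions) >= volume_spike * baseline_volume
                and negative_share(mentions) >= negativity_threshold)

    # A sudden burst of angry mentions trips the alarm.
    burst = ["This is an outrage, time to boycott"] * 90 + ["Flight was fine"] * 10
    print(should_alert(burst, baseline_volume=20))  # True

The point is the shape of the system, not the specifics: the machine watches the firehose around the clock and pages a human when the numbers turn ugly. The human still decides what to say.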
AI Is Only As Good As Its Data
But if AI can be trained to alert us to particular ethical issues, can’t it also be trained to make ethical judgments? Maybe. But AI would need to be trained on ethics and would only ever be as ethical as it was trained to be. The problem is that ethics are generally something that humans feel rather than intellectualize. We can’t train that feeling, and though we could give AI examples of ethical behavior, it’s inevitable that we’d miss some. Remember Microsoft’s Tay, a Twitter chatbot designed to learn from its interactions with others in the Twitterverse? The bot quickly became racist and foul-mouthed — the result of being fed data by a series of trolls from 4chan. […]
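The Tay failure mode is easy to demonstrate with even a toy model. The word-count classifier below is an illustrative stand-in, nothing like Tay’s actual architecture, but it shows how a handful of mislabeled, troll-supplied examples can flip a model’s judgment.

    # Toy illustration of "only as ethical as its training data."
    # A deliberately naive word-count classifier, used here only to show
    # how poisoned examples shift a model's output.

    from collections import Counter

    def train(examples):
        """Count word frequencies per label ('ok' or 'toxic')."""
        counts = {"ok": Counter(), "toxic": Counter()}
        for text, label in examples:
            counts[label].update(text.lower().split())
        return counts

    def classify(counts, text):
        """Label text by which class its words appear in more often."""
        words = text.lower().split()
        ok = sum(counts["ok"][w] for w in words)
        toxic = sum(counts["toxic"][w] for w in words)
        return "toxic" if toxic > ok else "ok"

    clean_data = [("have a nice day", "ok"), ("you are terrible", "toxic")]
    # Trolls flood the training stream with mislabeled abuse.
    poisoned_data = clean_data + [("you are terrible", "ok")] * 10

    print(classify(train(clean_data), "you are terrible"))     # toxic
    print(classify(train(poisoned_data), "you are terrible"))  # ok

Whatever the trolls taught it, it learned. The model has no feeling for ethics, only statistics over its training data.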