First step towards operationalizing AI ethics

Author and Copyright: Anand Rao and Ilana Golbin via 

In the first part of this series, we looked at AI risks across five dimensions. We talked about the dark side of AI without really going into how we would manage and mitigate these risks. In this and subsequent articles, we will look at how to exploit the benefits of AI while guarding against the risks.

A quick plot of search trends shows that the terms “AI Ethics”, “Ethical AI”, “Beneficial AI”, “Trustworthy AI”, and “Responsible AI” have become extremely popular over the past five years. In my (first author’s) early explorations of AI in the ’80s and ’90s, talk of ethics was relegated to a small fringe of academics and was definitely not a topic of conversation in the business world, at least not with respect to AI ethics. It is not surprising that these terms are trending, given both the adoption of AI and the substantial risks of AI that we examined earlier in the series.

But what do all these terms really mean? Who is coming up with them? And what do they mean for a company, especially one that is not a technology company using and promoting AI?

What is in a Name?

What’s in a name? That which we call a rose by any other name would smell as sweet.

This famous line comes from William Shakespeare’s play “Romeo and Juliet”. Does it really matter what we call a set of principles for AI? Isn’t “Responsible AI” as good a term as “Trustworthy AI” or “Ethical AI” or “Beneficial AI”? Unfortunately, the answer is an emphatic no. What we call these principles really matters. […]
