What is ethical Artificial Intelligence? As more organizations implement Artificial Intelligence technology into their processes, leaders are taking a closer look at AI bias and ethical considerations. Consider these key questions.

Copyright by enterprisersproject.com

Do you have some anxiety about Artificial Intelligence (AI) bias or related issues? You’re not alone. Nearly all business leaders surveyed for Deloitte’s third State of AI in the Enterprise report expressed concerns around the ethical risks of their AI initiatives.

There is certainly some cause for uneasiness. Nine out of ten respondents to a late 2020 Capgemini Research Institute survey were aware of at least one instance where an AI system had resulted in ethical issues for their businesses. Nearly two-thirds had experienced discriminatory bias with AI systems, six out of ten said their organizations had attracted legal scrutiny as a result of AI applications, and 22 percent said they had suffered customer backlash because of decisions reached by AI systems.

As Capgemini leaders pointed out in their recent blog post: “Enterprises exploring the potential of AI need to ensure they apply AI the right way and for the right purposes. They need to master Ethical AI.”

7 Artificial Intelligence ethics questions leaders often hear

While organizations aggressively pursue increased AI capabilities, they will look to IT and data science leaders to explain the risks and best practices around ethical and trusted AI. “In a future where AI is ubiquitous, adopters should be creative, become smarter AI consumers, and establish themselves as trustworthy guardians of customer data in order to remain relevant and stay ahead of the competition,” says Paul Silverglate, vice chair and U.S. technology sector leader for Deloitte.

Here, AI experts address some common questions about ethical AI. You may hear these from colleagues, customers, and others. Consider them in the context of your organization:

1. Isn’t AI itself inherently ethical and unbiased?

It may seem that technology is neutral, but that is not exactly the case. AI is only as equitable as the humans who create it and the data that feeds it. “Machine learning that supports automation and AI technologies is not created by neutral parties, but instead by humans with bias,” explains Siobhan Hanna, managing director of AI data solutions for digital customer experience services provider Telus International.

“We might never be able to eliminate bias, but we can understand bias and limit the impact it has on AI-enabled technologies. This will be important as the cutting-edge, AI-supported technology of today can and will become outdated rapidly.”

2. What is ethical AI?

AI or algorithmic bias is one concern that the ethical use of AI aims to mitigate, but it is not the only one. Ethical AI considers the full impact of AI usage on all stakeholders, from customers and suppliers to employees and society as a whole, and seeks to prevent or root out potentially “bad, biased, and unethical” uses of AI. “Artificial intelligence has limitless potential to positively impact our lives, and while companies might have different approaches, the process of building AI solutions should always be people-centered,” says Telus International’s Hanna.

“Responsible AI considers the technology’s impact not only on users but on the broader world, ensuring that its usage is fair and responsible,” Hanna explains. “This includes employing diverse AI teams to mitigate biases, ensure appropriate representation of all users, and publicly state privacy and security measures around data usage and personal information collection and storage.”

3. How big a concern is ethical AI?

It’s top of mind from boardrooms (where C-suite leaders are becoming aware of risks like biased AI) to break rooms (where employees worry about the impact of intelligent automation on jobs). […]

Read more: enterprisersproject.com