While AI holds great promise for society, the speed of its advancement has far outpaced the ability of businesses and governments to monitor and assess its outcomes properly. Regulatory oversight remains ambiguous in much of the world, so there is no external guarantee that AI directly reflects society's needs. It is therefore important that organizations take deliberate steps to demonstrate their trustworthiness to all stakeholders and to build the reputation of their AI.

Copyright by www.weforum.org

Trust in AI starts with stakeholders understanding that a particular organization uses AI responsibly. It is unlikely that external stakeholders will identify individual AI systems as "trustworthy" or "untrustworthy"; rather, an organization is considered trustworthy or not, and AI systems inherit the organization's reputation. In the same way that an organization's human staff showcases the organization's values, the behaviours of the AI system are both a manifestation of and an influence on the organization's reputation.

Training staff is a familiar challenge for most organizations, but the challenges of implementing ethical and trustworthy AI are new and different. They are, however, well documented: more than 90% of surveyed companies report having encountered ethical issues. How can an organization do better?

In the current ambiguous regulatory environment, the complexity of AI systems drives organizations to seek new means to support their AI development. At a bare minimum, most sophisticated technology companies say they place the fairness, ethics, accountability and transparency (FEAT) principles at the centre of AI development, and more than 170 organizations have published AI and data principles. In practice, however, many organizations tend to "build until something breaks" without considering who is affected by the break.
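The article does not spell out how principles like FEAT translate into day-to-day engineering, but fairness, for instance, is commonly operationalized as a measurable check in the development pipeline. The Python sketch below is purely illustrative and not drawn from the article; the function names, the toy data and the 10% tolerance are assumptions standing in for whatever metric and threshold an organization's policy actually defines.

    # Illustrative only: a minimal demographic-parity check of the kind a
    # team might run before deployment. All names and the MAX_GAP threshold
    # are assumptions, not taken from the article.

    def positive_rate(outcomes: list[int]) -> float:
        """Share of positive (1) decisions in a group."""
        return sum(outcomes) / len(outcomes)

    def parity_gap(group_a: list[int], group_b: list[int]) -> float:
        """Absolute difference in positive-decision rates between two groups."""
        return abs(positive_rate(group_a) - positive_rate(group_b))

    MAX_GAP = 0.10  # tolerance set by organizational policy, not by this sketch

    if __name__ == "__main__":
        # Toy model decisions (1 = approved) for two demographic groups.
        group_a = [1, 1, 0, 1, 0, 1, 1, 0]
        group_b = [1, 0, 0, 1, 0, 0, 1, 0]
        gap = parity_gap(group_a, group_b)
        print(f"Demographic parity gap: {gap:.2f}")
        if gap > MAX_GAP:
            print("Gap exceeds policy threshold; flag the model for review.")

A check like this makes one aspect of "fairness" auditable; the harder organizational work is deciding which metric and threshold are appropriate, and who is accountable when the check fails.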

We believe that tangible progress toward the responsible use of technology can be made by taking advantage of people, process and technology to bridge these gaps. Here’s how:

1. People

Organization leaders – from managers to the C-suite and board of directors – often have little understanding of the assumptions and decisions made throughout the development and implementation of AI. Yet regardless of how well they understand the AI, leaders own its reputational and financial outcomes, both positive and negative. Data scientists, on the other hand, can find it challenging to take all the guidelines, regulations and organizational principles into account during the development process.

In both cases, the challenge is not generally a lack of understanding of what it means to be responsible; it is a lack of insight into what factors are important at different levels of the organization and how they affect outcomes. Finding ways (processes and tools) to bring disparate groups together can be highly effective. […]



Read more: www.weforum.org