Getting started on your Responsible AI journey

Author and Copyright: Anand Rao, Global Artificial Intelligence Lead, PwC, via towardsdatascience.com

At a recent conference on Responsible AI for Social Empowerment (RAISE), held in India, one of the topics of discussion was explainable AI. Explainable AI is a critical element of the broader discipline of responsible AI. Responsible AI encompasses ethics, regulations, and governance across a range of risks and issues related to AI, including bias, transparency, explicability, interpretability, robustness, safety, security, and privacy.

Interpretability and explainability are closely related topics. Interpretability operates at the level of the model, with the objective of understanding the decisions or predictions of the model as a whole. Explainability operates at the level of an individual instance, with the objective of understanding why the model made a specific decision or prediction. When it comes to explainable AI, we need to consider five key questions: Whom to explain? Why explain? When to explain? How to explain? What is the explanation?
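To make the distinction concrete, here is a minimal sketch that contrasts the two views on a simple logistic regression model, where both are easy to read. The dataset, feature names, and weights are hypothetical, chosen purely for illustration; this is a sketch of the concept, not a prescribed implementation.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
feature_names = ["income", "debt_ratio", "credit_age"]  # hypothetical features

# Synthetic data: the label depends on the features through known weights.
X = rng.normal(size=(500, 3))
true_weights = np.array([2.0, -1.5, 0.5])
y = (X @ true_weights + rng.normal(scale=0.5, size=500) > 0).astype(int)

model = LogisticRegression().fit(X, y)

# Interpretability (model level): the learned coefficients summarize how
# the model behaves overall, across all inputs.
for name, coef in zip(feature_names, model.coef_[0]):
    print(f"global weight for {name}: {coef:+.2f}")

# Explainability (instance level): for one applicant, decompose the linear
# score into per-feature contributions to see why this prediction was made.
x = X[0]
contributions = model.coef_[0] * x
print("prediction:", model.predict(x.reshape(1, -1))[0])
print("per-feature contributions:",
      dict(zip(feature_names, np.round(contributions, 2))))
```

For more complex models, the same instance-level view is typically obtained with model-agnostic techniques such as LIME or SHAP, while model-level understanding comes from summaries such as feature importances or partial dependence.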

Explainable AI

Whom to Explain?

The first question to answer is whom the explanation is for. Understanding the audience's motivation, the action or decision they plan to take, and their level of mathematical or technical knowledge and expertise are all important aspects to consider when formulating the explanation. Based on our experience, we propose four main categories of audience:

  • End users: These are consumers who receive an explanation of a decision, action, or recommendation made by an AI system. The explanation itself might be delivered digitally (e.g., through a smartphone app or online application) or by a human (e.g., a loan officer explaining why the AI system denied the consumer's loan application). The primary concern of end users is the impact that the decision, action, or recommendation might have on their lives.
  • Business sponsors: These are executives of business or functional units that use AI systems to make decisions, take actions, or issue recommendations affecting other business or functional units or their customers. Business sponsors are concerned with both individual explanations and broader model interpretability. Their primary concern is the governance process that ensures the organization complies with regulations and that customers are satisfied with the explanations. […]

Read more: towardsdatascience.com