As society strives to master artificial intelligence, it is recognizing the need for explainable AI. This emerging trend will force organizations to create models that are both effective and good for society.

Businesses today are spending billions pursuing artificial intelligence. Their end goal is to develop thinking machines that help them run their operations more effectively, increase revenue, and achieve their organizational goals. They are engaging data scientists to obtain the right data from multiple sources and to generate models that, when paired with that data, enable the business to execute large quantities of decisions effectively and efficiently without significant human intervention.

Until recently, the success of an AI project was judged only by its outcomes for the company, but an emerging industry trend suggests another goal — explainable artificial intelligence (XAI). The gravitation toward XAI stems from demand from consumers (and ultimately society) to better understand how AI decisions are made. Regulations, such as the General Data Protection Regulation (GDPR) in Europe, have increased the demand for more accountability when AI is used to make automated decisions, especially in cases where bias has a detrimental effect on individuals.

What Is a Model?

The first step in understanding how to achieve XAI is to understand what a model is and how it works.

Simply stated, a model is a set of transformations that convert raw data into information, most often by applying statistics and advanced mathematical constructs such as calculus and linear algebra. What makes AI models different from traditional data transformations is that the model is constructed by employing algorithms to expose patterns from historical data; those patterns form the basis for the mathematical transformation.
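
To make this concrete, here is a minimal sketch (not taken from the original article) of a model as a learned transformation, written in Python with scikit-learn; the loan-approval scenario, column names, and figures are illustrative assumptions only.

# Minimal sketch: a model is a data-to-information transformation whose
# parameters are learned from historical examples rather than hand-coded.
# The loan-approval data below is invented for illustration.
import pandas as pd
from sklearn.linear_model import LogisticRegression

# Historical records with known outcomes -- the patterns live in this data.
history = pd.DataFrame({
    "income_k":     [42.0, 85.0, 31.0, 120.0, 56.0],   # income in thousands
    "debt_ratio":   [0.45, 0.20, 0.60, 0.10,  0.35],
    "was_approved": [0,    1,    0,    1,     1],
})

# Fitting exposes patterns in the historical data; the fitted coefficients
# become the mathematical transformation the article describes.
model = LogisticRegression()
model.fit(history[["income_k", "debt_ratio"]], history["was_approved"])

# Applying the model converts raw data (a new applicant) into information
# (an approval probability) without any hand-written decision rules.
new_applicant = pd.DataFrame({"income_k": [64.0], "debt_ratio": [0.30]})
print(model.predict_proba(new_applicant)[0, 1])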

Traditional data transformations are most often a set of directives and rules established and programmed by a developer to achieve a specific purpose. Because AI models learn from the data they are given, they can be regenerated periodically as new data accumulates, detecting and adjusting to changes in the underlying behaviors being modeled.
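
To sharpen that contrast, the sketch below (with invented thresholds, field names, and a hypothetical retrain helper, none of which come from the article) compares a hard-coded rule with a model that is simply refit as newer records arrive.

# Illustrative contrast; the rule, thresholds, and data layout are assumptions.
from sklearn.tree import DecisionTreeClassifier

# Traditional transformation: a developer hard-codes the directive. Adapting
# to changed customer behavior means a human editing this function.
def rule_based_approval(income_k: float, debt_ratio: float) -> bool:
    return income_k > 50.0 and debt_ratio < 0.40

# AI model: the rule is learned from data, so adapting to changed behavior
# means periodically refitting on the most recent labeled records.
def retrain(features, outcomes):
    model = DecisionTreeClassifier(max_depth=3)
    model.fit(features, outcomes)
    return model

# Re-run on a schedule as fresh outcomes accumulate, e.g.:
# model = retrain(recent[["income_k", "debt_ratio"]], recent["was_approved"])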

One of the strengths of AI is that the process of creating a model can identify patterns that are not obvious or intuitive from simply looking at the data. This is also one of its weaknesses: AI is often viewed as a black box that produces results without explaining what is happening inside the model. […]
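
As one generic illustration of how that black box can be opened up (a common inspection technique, not necessarily the approach the full article goes on to describe), permutation importance measures how strongly each input feature drives a fitted model's predictions; the dataset and model below are assumptions chosen only to make the sketch runnable.

# Generic sketch: score each feature by how much accuracy drops when that
# feature's values are shuffled -- larger drops mean heavier reliance on it.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

# Shuffle one feature at a time on held-out data and record the accuracy drop.
result = permutation_importance(model, X_test, y_test, n_repeats=10,
                                random_state=0)
top = sorted(zip(X.columns, result.importances_mean),
             key=lambda pair: pair[1], reverse=True)[:5]
for name, score in top:
    print(f"{name}: {score:.3f}")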

read more – copyright by tdwi.org