Explainable AI: A guide for making black box machine learning models explainable

Machine learning is taking the world by storm, helping automate more and more tasks. As digital transformation expands the volume and coverage of available data, machine learning sets its sights on tasks of increasing complexity and achieves better accuracy.

But machine learning (ML), which many people conflate with the broader discipline of artificial intelligence (AI), is not without its issues. ML works by feeding historical, real-world data to algorithms used to train models. ML models can then be fed new data and produce results of interest, based on the historical data used to train the model.
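
To make that train-then-predict loop concrete, here is a minimal sketch assuming scikit-learn; the synthetic dataset and the choice of logistic regression are illustrative assumptions, not something the article prescribes.

```python
# A minimal sketch of the train-then-predict workflow described above,
# assuming scikit-learn; the synthetic dataset stands in for
# "historical real-world data".
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, n_features=10, random_state=0)
X_hist, X_new, y_hist, y_new = train_test_split(X, y, random_state=0)

# Training: an algorithm fits a model to the historical data.
model = LogisticRegression(max_iter=1000).fit(X_hist, y_hist)

# Inference: the trained model is fed new data and produces
# results of interest.
print(model.predict(X_new[:5]))
```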

A typical example is diagnosing medical conditions. ML models can be produced using data such as X-rays and CT scans, and then be fed new data and asked to identify whether a medical condition is present or not. In situations like these, however, getting an outcome is not enough: we need to know the explanation behind it, and this is where it gets tricky.
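
The article does not prescribe a particular explanation method, but one widely used model-agnostic technique covered in Molnar's book is permutation feature importance. The sketch below assumes scikit-learn and uses a tabular medical dataset as a stand-in; real imaging data such as X-rays would call for different, saliency-based methods.

```python
# A hedged sketch of one model-agnostic explanation technique,
# permutation feature importance, assuming scikit-learn. The breast
# cancer dataset is an illustrative tabular stand-in for medical data.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

data = load_breast_cancer()
X_train, X_test, y_train, y_test = train_test_split(
    data.data, data.target, random_state=0
)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn and measure how much the model's score
# drops: features whose shuffling hurts most matter most to the model.
result = permutation_importance(model, X_test, y_test, n_repeats=10,
                                random_state=0)
for idx in result.importances_mean.argsort()[::-1][:5]:
    print(f"{data.feature_names[idx]}: {result.importances_mean[idx]:.3f}")
```

An explanation like this does not open the black box itself, but it tells a clinician which measurements drove the model's verdict, which is exactly the gap the paragraph above describes.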

Explainable AI

Christoph Molnar is a data scientist and PhD candidate in interpretable machine learning. Molnar has written the book “Interpretable Machine Learning: A Guide for Making Black Box Models Explainable”, in which he elaborates on the issue and examines methods for achieving explainability.

Molnar uses the terms interpretable and explainable interchangeably. Notwithstanding the ML/AI conflation, the book is a good introduction to explainable AI and how to get there. Well-researched and approachable, it provides a good overview for experts and non-experts alike. While we summarize findings here, we encourage interested readers to dive in for themselves.

Interpretability can be defined as the degree to which a human can understand the cause of a decision, or the degree to which a human can consistently predict a model’s result. The higher the interpretability of a model, the easier it is to comprehend why certain decisions or predictions have been made.
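
As a hedged illustration of this definition, compare a model whose decisions a human can trace directly against one whose decisions they cannot. The dataset and model choices below are assumptions for the sake of the example, assuming scikit-learn.

```python
# Illustrating degrees of interpretability, assuming scikit-learn:
# a linear model's coefficients state the cause of each prediction
# directly, while a large tree ensemble offers no such direct reading.
from sklearn.datasets import load_diabetes
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.linear_model import LinearRegression

data = load_diabetes()

# High interpretability: each coefficient says how much one feature
# shifts the prediction, so a human can trace every decision.
linear = LinearRegression().fit(data.data, data.target)
for name, coef in zip(data.feature_names, linear.coef_):
    print(f"{name}: {coef:+.1f}")

# Low interpretability: the prediction emerges from many trees;
# no single parameter explains the result.
black_box = GradientBoostingRegressor(random_state=0).fit(data.data,
                                                          data.target)
print(len(black_box.estimators_), "trees contribute to each prediction")
```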

There is no real consensus about what interpretability is in ML, nor is it clear how to measure it, notes Molnar. But there is some initial research on this and an attempt to formulate some approaches for evaluation. Three main levels for the evaluation of interpretability have been proposed:

Application level evaluation (real task): Put the explanation into the product and have it tested by the end user. Evaluating fracture detection software with an ML component, for example, would involve radiologists testing the software directly to evaluate the model. A good baseline for this is always how good a human would be at explaining the same decision. […]

Read more – copyright by www.zdnet.com
