Machine learning is taking the world by storm, helping to automate more and more tasks. As digital transformation expands, the volume and coverage of available data grow, and machine learning sets its sights on tasks of increasing complexity while aiming for better accuracy.

But machine learning (ML), which many people conflate with the broader discipline of artificial intelligence (AI), is not without its issues. ML works by feeding historical, real-world data to algorithms that train models. A trained model can then be fed new data and produce results of interest, based on the historical data it was trained on.
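
To make that workflow concrete, here is a minimal sketch in Python with scikit-learn; the synthetic dataset and the choice of a random forest are illustrative assumptions, not something taken from the article.

```python
# Minimal train-then-predict sketch: fit a model on "historical" data,
# then feed it new data to obtain predictions. All data here is synthetic.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Historical, labelled data: feature vectors X with known outcomes y.
X, y = make_classification(n_samples=1000, n_features=10, random_state=0)
X_hist, X_new, y_hist, y_new = train_test_split(X, y, test_size=0.2, random_state=0)

# Train a model on the historical data.
model = RandomForestClassifier(random_state=0).fit(X_hist, y_hist)

# Feed the trained model new data and read off the results of interest.
predictions = model.predict(X_new)
print("Accuracy on new data:", model.score(X_new, y_new))
```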

A typical example is diagnosing medical conditions. ML models can be trained on data such as X-rays and CT scans, and then be fed new data and asked to identify whether a medical condition is present. In situations like these, however, getting an outcome is not enough: we need to know the explanation behind it, and this is where it gets tricky.
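
As a hedged illustration of that gap between outcome and explanation, the sketch below trains a black-box classifier on a tabular stand-in for clinical data and then applies permutation feature importance, one common model-agnostic technique, to see which inputs the model actually relies on. The dataset and the choice of method are assumptions for illustration, not taken from the article.

```python
# A black-box model produces an outcome; a model-agnostic method
# (permutation feature importance) is then used to probe which inputs
# drive its behaviour. The dataset is an illustrative stand-in.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

data = load_breast_cancer()
X_train, X_test, y_train, y_test = train_test_split(
    data.data, data.target, random_state=0
)

model = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)
print("Predicted class for one new case:", model.predict(X_test[:1])[0])

# Rank the five features the model depends on most.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for idx in result.importances_mean.argsort()[::-1][:5]:
    print(f"{data.feature_names[idx]}: {result.importances_mean[idx]:.3f}")
```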

Explainable AI

Christoph Molnar is a data scientist and PhD candidate in interpretable machine learning. Molnar has written the book “Interpretable Machine Learning: A Guide for Making Black Box Models Explainable”, in which he elaborates on the issue and examines methods for achieving explainability.

Molnar uses the terms interpretable and explainable interchangeably. Notwithstanding the AI/ML conflation, this is a good introduction to explainable AI and how to get there. Well-researched and approachable, the book provides a good overview for experts and non-experts alike. While we summarize findings here, we encourage interested readers to dive in for themselves.

Interpretability can be defined as the degree to which a human can understand the cause of a decision, or the degree to which a human can consistently predict an ML model’s result. The higher the interpretability of a model, the easier it is to comprehend why certain decisions or predictions have been made.
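
To make that definition concrete, here is a small sketch of a model that is interpretable by construction: a shallow decision tree whose learned rules can be printed and followed by hand. The dataset and tree depth are illustrative choices, not examples from the book.

```python
# An inherently interpretable model: a shallow decision tree whose rules
# a human can read directly and use to predict the model's own output.
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

iris = load_iris()
tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(iris.data, iris.target)

# The printed rules are the explanation: following the thresholds by hand
# reproduces the model's decision for any new flower.
print(export_text(tree, feature_names=list(iris.feature_names)))
```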

There is no real consensus about what interpretability means in ML, nor is it clear how to measure it, Molnar notes. But there is some initial research and an attempt to formulate approaches for evaluating it. Three main levels for evaluating interpretability have been proposed:

Application-level evaluation (real task): Put the explanation into the product and have it tested by the end user. Evaluating fracture-detection software with an ML component, for example, would involve radiologists testing the software directly to evaluate the model. A good baseline for this is always how well a human would explain the same decision.[…]

read more – copyright by www.zdnet.com