“Interaction and understandability are thus crucially important for building trust in AI systems that learn from data”. Surprisingly, the links between interaction, explanation and building trust have largely been ignored in research – until now.
Imagine the following situation: A company wants to teach an artificial intelligence (AI) system to recognise horses in photos. To this end, it trains the AI on several thousand images of horses until it can reliably identify the animal even in images it has never seen before.
The AI learns quickly. How it reaches its decisions is not clear to the company, but that is not really a concern – the company is simply impressed by how reliably the process works.
However, a sceptical observer then discovers that a copyright notice with a link to a website about horses is printed in the bottom-right corner of the photos. The AI has made things easy for itself and has learnt to recognise a horse based on this copyright notice alone.
Researchers call such signals confounders: confounding factors that should have nothing to do with the identification task itself. The process works only as long as the AI keeps receiving comparable photos; if the copyright notice is missing, the AI is left high and dry, as the toy example below illustrates.
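To make the confounder problem concrete, here is a minimal, self-contained sketch in Python (not from the original article – the toy classifier is entirely hypothetical). It mimics an AI that “detects horses” purely by checking for a bright watermark in the bottom-right corner, and fails as soon as the watermark is removed:

```python
# Toy illustration of a "Clever Hans" classifier (hypothetical, for intuition only).
# The model "recognises a horse" solely by checking whether the bottom-right
# corner of the image contains a bright watermark - the confounder.

import numpy as np

def clever_hans_classifier(image: np.ndarray) -> bool:
    """Returns True ("horse") if the bottom-right 8x8 corner is bright.
    The decision rests entirely on the watermark, not on the animal."""
    return bool(image[-8:, -8:].mean() > 0.9)

rng = np.random.default_rng(0)

# A "horse photo" from the training distribution: arbitrary image content
# plus a bright copyright watermark stamped into the bottom-right corner.
horse_with_watermark = rng.random((64, 64))
horse_with_watermark[-8:, -8:] = 1.0

# The same photo with the watermark masked out.
horse_without_watermark = horse_with_watermark.copy()
horse_without_watermark[-8:, -8:] = 0.0

print(clever_hans_classifier(horse_with_watermark))     # True  - "horse"
print(clever_hans_classifier(horse_without_watermark))  # False - left high and dry
```

On the training distribution the toy model looks flawless, yet it has learnt nothing about horses – exactly the failure mode described above.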
Several studies over the last few years have shown how to uncover this kind of undesired decision-making in AI systems, even when very large datasets are used for training. Such cases have become known as Clever Hans moments of AI – named after a horse that, at the beginning of the last century, was supposedly able to solve simple arithmetic problems, but in fact found the right answer only by “reading” the body language of the questioner.
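One common family of techniques for such checks is sensitivity analysis: systematically hiding parts of the input and watching how the model’s confidence changes. The sketch below is a hedged illustration of this idea, reusing the hypothetical toy scorer from above rather than any real system. It computes an occlusion heatmap; for a Clever Hans predictor, the heatmap lights up on the watermark corner instead of the horse:

```python
# Occlusion-sensitivity sketch (hypothetical toy, for intuition only).
# Slide a blank patch over the image and record how much the model's
# confidence drops: large drops mark the regions the model relies on.

import numpy as np

def score(image: np.ndarray) -> float:
    # Hypothetical confidence score of the Clever Hans model:
    # the brightness of the bottom-right watermark corner.
    return float(image[-8:, -8:].mean())

def occlusion_heatmap(image: np.ndarray, patch: int = 8) -> np.ndarray:
    """Confidence drop when each patch-sized region is blanked out."""
    base = score(image)
    h, w = image.shape
    heat = np.zeros((h // patch, w // patch))
    for i in range(0, h, patch):
        for j in range(0, w, patch):
            masked = image.copy()
            masked[i:i + patch, j:j + patch] = 0.0  # occlude one region
            heat[i // patch, j // patch] = base - score(masked)
    return heat

img = np.random.default_rng(1).random((64, 64))
img[-8:, -8:] = 1.0                 # stamp the watermark confounder
heat = occlusion_heatmap(img)
print(np.unravel_index(heat.argmax(), heat.shape))  # (7, 7): the watermark corner
```

A model that genuinely recognised the animal would instead show high sensitivity over the horse itself.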
“AI with a Clever Hans moment learns to draw the right conclusions for the wrong reasons”, says Kristian Kersting, Professor of Artificial Intelligence and Machine Learning in the Department of Computer Science at TU Darmstadt and a member of its Centre for Cognitive Science. It is a problem that potentially confronts every AI researcher, and it makes clear why the call for “explainable AI” has grown louder in recent years.
Kersting is well aware of the consequences of this problem: “Eliminating Clever Hans moments is one of the most important steps towards a practical application and dissemination of AI, in particular in scientific and in safety-critical areas.”
For a number of years, the researcher and his team have been developing AI solutions that can determine a plant’s resistance to parasites or detect an infestation at an early stage – even before it is visible to the human eye. The prerequisite for the success of such an application, however, is that the AI system is right for the right scientific reasons, so that domain experts actually trust it. If that trust cannot be established, the experts will turn away from the AI – and will thus miss the opportunity to use AI to create resistant plants in times of global warming. […]