EduTech Research Solutions

Trust In Artificial Intelligence, But Not Blindly

“Interaction and understandability are thus crucially important for building trust in systems that learn from data”. Surprisingly, the links between interaction, explanation and building trust have largely been ignored in research – until now.

Copyright by www.eurasiareview.com

Imagine the following situation: a company wants to teach an artificial intelligence (AI) to recognise a horse in photos. To this end, it uses several thousand images of horses to train the AI until it is able to reliably identify the animal even in unknown images.

The AI learns quickly. It is not clear to the company how it is making its decisions, but this is not really an issue; the company is simply impressed by how reliably the process works.

However, a suspicious person then discovers that copyright information with a link to a website on horses is printed in the bottom right corner of the photos. The AI has made things relatively easy for itself and has learnt to recognise the horse based only on this copyright notice.

Researchers call such cases confounders – confounding factors that should actually have nothing to do with the identification process. The process will keep working as long as the AI continues to receive comparable photos; if the copyright notice is missing, the AI is left high and dry.
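The effect is easy to reproduce in miniature. The following self-contained sketch (toy data and a hypothetical one-feature learner, not any system described in the article) encodes each "photo" as a few noisy pixel values plus one watermark pixel that perfectly marks horse photos in the training set. A simple learner that picks the single most class-separating pixel latches onto the watermark – and fails completely once the watermark is gone:

```python
import random

random.seed(0)

def make_image(is_horse, with_watermark):
    # Toy "image": 10 noisy pixels that only weakly encode the horse,
    # plus one watermark pixel (the copyright notice) at index 10.
    pixels = [random.gauss(0.6 if is_horse else 0.4, 0.5) for _ in range(10)]
    pixels.append(1.0 if with_watermark else 0.0)
    return pixels

# Training set: every horse photo carries the watermark (the confounder).
train = [(make_image(True, True), 1) for _ in range(200)] + \
        [(make_image(False, False), 0) for _ in range(200)]

def best_pixel(data):
    # Minimal learner: pick the single pixel whose class means differ most
    # (a decision stump). The watermark pixel separates the classes perfectly.
    n = len(data[0][0])
    def gap(i):
        horses = [x[i] for x, y in data if y == 1]
        others = [x[i] for x, y in data if y == 0]
        return abs(sum(horses) / len(horses) - sum(others) / len(others))
    return max(range(n), key=gap)

stump = best_pixel(train)
print("pixel the model relies on:", stump)  # -> 10, the watermark pixel

def predict(x, threshold=0.5):
    return 1 if x[stump] > threshold else 0

# At test time the copyright notice is missing, so the shortcut collapses:
test_horses = [make_image(True, False) for _ in range(100)]
accuracy = sum(predict(x) for x in test_horses) / 100
print("accuracy on unwatermarked horses:", accuracy)  # -> 0.0
```

The noisy pixels do carry a real (weak) signal, but the confounder is a far stronger cue, so the shortcut wins – exactly the Clever Hans behaviour described below.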

Various studies carried out over the last few years have shown how to uncover this kind of undesired decision making in AI systems, even when very large datasets are used for training. Such cases have become known as Clever Hans moments of AI – named after a horse that, at the beginning of the last century, was supposed to be able to solve simple arithmetic sums but in fact only found the right answer by “reading” the body language of the questioner.

“An AI with a Clever Hans moment learns to draw the right conclusions for the wrong reasons,” says Kristian Kersting, Professor of Artificial Intelligence and Machine Learning in the Department of Computer Science at TU Darmstadt and a member of its Centre for Cognitive Science. This is a problem that all AI researchers are potentially confronted with, and it makes clear why the call for “explainable AI” has grown louder in recent years.

Kersting understands the consequences of this problem: “Eliminating Clever Hans moments is one of the most important steps towards a practical application and dissemination of AI, in particular in scientific and in safety-critical areas.”

The researcher and his team have for a number of years been developing AI solutions that can determine the resistance of a plant to parasites or detect an infestation at an early stage – even before it can be perceived by the human eye. The prerequisite for the success of such an application, however, is that the AI system is right for the right scientific reasons, so that the domain experts actually trust it. If it is not possible to generate trust, the experts will turn away from the AI – and will thus miss the opportunity to use AI to create resistant plants in times of global warming. […]
