In light of recent accidents with autonomous and semi-autonomous vehicles, will people put their trust in artificial intelligence? Missouri S&T researchers are digging for answers.

Given the choice of riding in an Uber driven by a human or a self-driving version, which would you choose?

Considering last month’s fatal crash of a self-driving Uber that took the life of a woman in Tempe, Arizona, and the recent death of a driver using Tesla’s semi-autonomous Autopilot system, people’s trust in the technology behind autonomous vehicles may also have taken a hit. The reliability of self-driving cars and other forms of artificial intelligence is one of several factors that affect humans’ trust in AI, machine learning and other technological advances, write two Missouri University of Science and Technology researchers in a recent journal article.

“Trust is the cornerstone of humanity’s relationship with artificial intelligence,” write Dr. Keng Siau, professor and chair of business and information technology at Missouri S&T, and Weiyu Wang, a Missouri S&T graduate student in information science and technology. “Like any type of trust, trust in AI takes time to build, seconds to break and forever to repair once it is broken.”

The Uber and Tesla incidents point to the need to rethink how AI applications such as autonomous driving systems are developed, and to the steps designers and manufacturers of these systems must take to build greater trust in their products, Siau says.

Despite these recent incidents, Siau sees a strong future for AI, but one fraught with trust issues that must be resolved.

‘A dynamic process’

“Trust building is a dynamic process, involving movement from initial trust to continuous trust development,” Siau and Wang write in “Building Trust in Artificial Intelligence, Machine Learning, and Robotics,” published in the February 2018 issue of Cutter Business Technology Journal.

In their article, Siau and Wang examine prevailing concepts of trust in general and in the context of AI applications and human-computer interaction. They discuss the three types of characteristics that determine trust in this area – human, environment and technology – and outline ways to engender trust in AI applications.

Siau and Wang point to five areas that can help build initial trust in artificial intelligence systems:

  • Representation. The more “human” a technology is, the more likely humans are to trust it. “That is why humanoid robots are so popular,” Siau says, adding that it is easier to “establish an emotional connection” with a robot that looks and acts like a human, or with a robotic dog that behaves like a real canine. Perhaps first-generation autonomous vehicles should have a humanoid “chauffeur” behind the wheel to help ease concerns.
  • Image or perception. Science fiction books and movies have given AI a bad image, Siau says. People tend to think of AI in dystopian terms, as colored by films like The Terminator and Blade Runner or the novels of Isaac Asimov and Philip K. Dick. “This image and perception will affect people’s initial trust in AI,” Siau and Wang write.
  • Reviews from other users. People tend to rely on online product reviews, and “a positive review leads to greater initial trust.”
  • Transparency and “explainability.” When a technology’s inner workings are hidden in a “black box,” that opacity can hinder trust. “To trust AI applications, we need to understand how they are programmed and what function will be performed in certain conditions,” Siau says.
  • Trialability. The ability to test a new AI application before being asked to adopt it leads to greater acceptance, Siau says.

[…]