Trust in AI decision-making is the key to smart cities, autonomous vehicles, diagnostics, and more. Once we have formed complete trust in AI, as we do with doctors and lawyers, the real race between man and machine will begin. Cognitively heavy tasks will be offloaded onto the machine, and humans will become the backup decision-makers who carry the liability.

 

SwissCognitive Guest Blogger: Eleanor Wright, COO at TelWAI – “AI Decision-Making: Can We Trust Artificial Intelligence to Get It Right?”


 

We form trust in AI the same way we form trust in engineers, doctors, and lawyers: it requires years of testing, oversight, and monitoring. Full autonomy isn’t handed over until a level of confidence is achieved and legal qualifiers are satisfied; and with time comes increased responsibility.

There is, however, a more fundamental level of trust that must be recognized, one built on severity and reliability. When it comes to AI, not all data, and not all functions, are created equal. Analyzing text and generating informed search results is very different from detecting an individual entering a public space with a gun. These functions sit at opposite ends of the severity scale in our day-to-day lives. Thus, the level of trust we place in the AI varies, and different qualifiers are applied to each.
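As a purely illustrative sketch of this idea, the snippet below maps hypothetical severity tiers to reliability qualifiers; the tiers, thresholds, and oversight rules are assumptions for illustration, not drawn from any standard.

```python
# Illustrative only: hypothetical severity tiers and reliability qualifiers.
from dataclasses import dataclass

# Minimum demonstrated reliability and oversight required per severity tier (assumed values).
QUALIFIERS = {
    "low":      {"min_reliability": 0.90,  "human_review": False},  # e.g. text search ranking
    "high":     {"min_reliability": 0.99,  "human_review": True},   # e.g. medical triage support
    "critical": {"min_reliability": 0.999, "human_review": True},   # e.g. weapon detection in public spaces
}

@dataclass
class AIFunction:
    name: str
    severity: str                 # "low", "high", or "critical"
    measured_reliability: float   # validated performance on test data

def autonomy_permitted(fn: AIFunction) -> bool:
    """Allow full autonomy only if the function meets the qualifier for its severity tier."""
    q = QUALIFIERS[fn.severity]
    return fn.measured_reliability >= q["min_reliability"] and not q["human_review"]

if __name__ == "__main__":
    search = AIFunction("document search", "low", measured_reliability=0.95)
    weapon = AIFunction("weapon detection", "critical", measured_reliability=0.97)
    print(autonomy_permitted(search))  # True: low severity, threshold met
    print(autonomy_permitted(weapon))  # False: critical severity keeps a human in the loop
```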

A network of trust, however, can become murky. As we move into the age of AIoT, AI operating at varying severity and reliability levels will become integrated into a wider network of operations. Sensors will no longer make decisions just for themselves; they will be tasked with making decisions across their network. The surveillance camera will no longer solely tell the operator that a threat has been identified; it will autonomously lock all the doors and limit the threat’s mobility. This level of trust requires the implementation of automated responses that can save time and lives. By removing human limitations from the decision, a greater level of security is achieved. That is, if we trust the system.
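A minimal sketch of such an automated response follows, assuming a hypothetical event format, confidence threshold, and actuator interface; none of these names come from a real product.

```python
# Hypothetical sketch of the camera-to-lockdown response described above.
from dataclasses import dataclass

AUTO_RESPONSE_THRESHOLD = 0.95  # assumed confidence above which the system acts on its own

@dataclass
class ThreatEvent:
    camera_id: str
    threat_type: str   # e.g. "weapon"
    confidence: float  # detector confidence in [0, 1]

def lock_doors(zone: str) -> None:
    print(f"Locking all doors in zone {zone}")  # stand-in for a real access-control call

def notify_operator(event: ThreatEvent) -> None:
    print(f"Operator alert: {event.threat_type} on {event.camera_id} ({event.confidence:.0%})")

def handle_threat(event: ThreatEvent, zone: str) -> None:
    """Always alert the operator; act autonomously only above the trust threshold."""
    notify_operator(event)
    if event.confidence >= AUTO_RESPONSE_THRESHOLD:
        lock_doors(zone)  # automated response: no human in the decision loop
    # Below the threshold, the human operator remains the decision-maker.

if __name__ == "__main__":
    handle_threat(ThreatEvent("cam-07", "weapon", 0.98), zone="lobby")
```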

Enabling these capabilities are sensors performing an array of functions, and within these ecosystems AI operates at varying levels. Some sensors are brought together to share data and generate decisions at the micro level, such as autonomous vehicles laden with lidar. Other systems, such as C4I systems, manage thousands of sensors detecting and reporting multiple levels of threats to users in one central control room. Although these systems operate independently today, their convergence is inevitable. They will share data and pool intelligence, enabling enhanced safety and security.
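To make the micro-to-macro flow concrete, here is a rough sketch of a micro-level ecosystem forwarding fused detections to a macro-level aggregator; the message schema and the in-memory bus are assumptions, not a real C4I interface.

```python
# Illustrative sketch only: micro-level detections feeding a macro-level picture.
import json
import queue

shared_bus = queue.Queue()  # stand-in for the shared message bus between ecosystems

def publish_micro_detection(source: str, obj: str, confidence: float) -> None:
    """A local sensor ecosystem shares one fused detection with the wider network."""
    shared_bus.put(json.dumps({"source": source, "object": obj, "confidence": confidence}))

def c4i_aggregate() -> list:
    """The central control room consumes detections from every connected ecosystem."""
    picture = []
    while not shared_bus.empty():
        picture.append(json.loads(shared_bus.get()))
    return picture

if __name__ == "__main__":
    publish_micro_detection("vehicle-12/lidar", "pedestrian", 0.91)
    publish_micro_detection("camera-grid/cam-07", "weapon", 0.98)
    print(c4i_aggregate())  # one common operating picture built from both ecosystems
```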

This convergence will once again amplify the element of trust. With each ecosystem requiring different legal and moral qualifiers to operate, how do governments enable data sharing amongst them whilst retaining trust? Sharing data between C4I systems and autonomous vehicles is clearly beneficial in advancing safety and security, but how is that sharing facilitated and regulated?


To answer these questions, we will have to explore a long list of requirements built on trust. Trust that the communications between these ecosystems are impenetrable. Trust that the fault tolerance of the AI is regulated to a high enough standard at the micro level that risks are mitigated once scaled to the macro level. Trust that the AI is built on a network that can handle the data load, and that the system architecture is designed to reduce that load. It’s by addressing these verticals of trust within AI, at multiple levels, that we will see AI applied to the automated functions of civil society.


About the Author:

Holding a BA in Marketing and an MSc in Business Management, Eleanor Wright has over eleven years of experience working in the surveillance sector across multiple business roles.