Perhaps the greatest impediment to widespread AI adoption today is the disillusionment among companies that fail to achieve the desired results with the AI software they purchase.

Copyright by Malay Upadhyay

In most cases, their own readiness in terms of data and processes has yet to mature. An equally big problem, however, is their inability to distinguish genuinely strong and effective AI solutions from those that merely flaunt the ‘AI’ label for promotional purposes.

So while it is important for all of us to discuss what AI can and cannot do, we should also think about standards to govern the day-to-day AI solutions available right now, and about how to make those solutions easily understandable to the public. That would allow effective, responsible and careful adoption of current AI solutions by users, and it would require an official certification or rating, much the way movies today are rated R, PG and so on, or a food product is rated on its level of spiciness.

One way could be to identify some of the most critical parameters to look for in any AI solution, and to rate and label them on a standard scale. A few such parameters are discussed below. Perhaps the community and policymakers can crystallize these further, and add to the list.

1. Depth of AI

There are many tools and techniques that belong to the AI domain: decision trees, random forests, gradient boosting, Monte Carlo methods and linear regression, to name a few. The use of any one of these (say, linear regression alone) can technically qualify a solution as AI-enabled, but it would not make the solution very accurate or useful for a user. This has led to disillusionment among early AI adopters, while also giving rise to a plethora of solutions and companies calling themselves AI. Since most users and policymakers would have difficulty judging the rigor of the AI simply by looking at the methods used, perhaps we can classify these techniques and assign a rating based on the type and number of tools used. Such a rating would reflect how rigorous, or deep, the underlying AI in a given solution is.
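
As a loose illustration only, such a rating might group techniques into tiers and score a solution by the most advanced tier it employs. The tier assignments and scoring rule below are hypothetical assumptions, not a proposed standard:

```python
# Hypothetical "depth of AI" rating: techniques are grouped into tiers,
# and a solution scores the highest tier among the techniques it uses.
# Tier assignments here are illustrative assumptions only.

TECHNIQUE_TIERS = {
    "linear_regression": 1,
    "decision_tree": 1,
    "random_forest": 2,
    "gradient_boosting": 2,
    "monte_carlo": 2,
    "deep_neural_network": 3,
    "reinforcement_learning": 3,
}

def depth_rating(techniques):
    """Return a 1-3 depth score for the techniques a solution uses."""
    tiers = [TECHNIQUE_TIERS.get(t, 1) for t in techniques]  # unknown -> tier 1
    return max(tiers) if tiers else 0

print(depth_rating(["linear_regression"]))                         # 1
print(depth_rating(["gradient_boosting", "deep_neural_network"]))  # 3
```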

2. Explainability of AI

This is a very important factor in limiting black-box AI solutions. Consider the case of Mount Sinai Hospital, which employed the AI solution Deep Patient to predict cases of schizophrenia, something that is otherwise notoriously difficult for doctors to do. While Deep Patient could indeed do it more accurately, the problem was that doctors had no clue why or how, and had to trust the AI blindly. The idea here is that if an AI solution makes a prediction or decision, it should also be able to explain the rationale behind it. Sooner or later, we will have ethics laws in place to ensure this. For now, we can at least rate the degree of transparency an AI solution offers, based on how well users can see and understand the reasoning behind its predictions.
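
As a concrete illustration of what "seeing the reasoning" can mean, the minimal sketch below uses permutation feature importance from scikit-learn on a synthetic dataset; the data and model are toy assumptions and have nothing to do with Deep Patient itself:

```python
# Minimal transparency sketch: permutation importance reports how much
# each input feature drives a model's predictions, giving users at least
# a coarse view of what the model relies on.

from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

X, y = make_classification(n_samples=500, n_features=6, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)

result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for i, score in enumerate(result.importances_mean):
    print(f"feature_{i}: importance {score:.3f}")
```

A solution exposing this kind of output to its users would rate higher on transparency than one returning bare predictions.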

3. Type of AI

An AI solution performs one or more of three broad tasks: sense, think and respond. In more detail, there are predictive analytics, chatbots, virtual agents, data visualization, speech/facial/social analytics and so on that different AI solutions are designed to perform. A proper dictionary of all these function types and their explanations would help label each AI solution by the kind of functions it performs.
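
Purely as a sketch of form, such a dictionary could be published in machine-readable shape so that solutions can be labelled automatically; the entries and descriptions below are illustrative assumptions:

```python
# Hypothetical entries for a dictionary of AI function types, each tagged
# with the broad sense/think/respond task it maps to.

AI_FUNCTION_TYPES = {
    "predictive_analytics": "Forecasts outcomes from historical data (think).",
    "chatbot": "Converses with users in natural language (sense/respond).",
    "virtual_agent": "Carries out tasks on a user's behalf (respond).",
    "data_visualization": "Summarizes data into visual insight (think).",
    "speech_analytics": "Extracts meaning from spoken audio (sense).",
    "facial_analytics": "Detects or analyzes faces in images (sense).",
}

def label_solution(functions):
    """Return the dictionary entries describing a given solution."""
    return {f: AI_FUNCTION_TYPES[f] for f in functions if f in AI_FUNCTION_TYPES}

print(label_solution(["chatbot", "predictive_analytics"]))
```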



4. Support needed for AI

Most organizations adopting AI solutions today are learning this the hard way: deriving value from an AI solution requires data cleanup, appropriate connectors, employee training, habitual change and a culture shift among employees, standard operating and measurement procedures, and one or more clear use cases. If we want AI adoption to succeed, clear messaging is needed on the support a solution requires to be implemented successfully and used to maximum utility.

5. Usage conditions for AI

Any AI solution requires a certain minimum amount of data, can be used only for specific use cases, and is usually effective in only certain situations even within those use cases. Such conditions and criteria should accompany any AI solution, so that potential users can clearly determine whether it will be useful to them.
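
One way to make these criteria unambiguous is to ship them with the solution in machine-readable form, so a potential buyer can self-qualify before purchase. In the hypothetical sketch below, every field name and threshold is an assumption made up for the example:

```python
# Illustrative usage conditions published alongside an AI solution,
# plus a helper a prospective user could run to self-qualify.

USAGE_CONDITIONS = {
    "min_training_rows": 10_000,
    "supported_use_cases": {"churn_prediction", "lead_scoring"},
    "required_fields": {"customer_id", "signup_date", "activity_log"},
}

def qualifies(n_rows, use_case, available_fields):
    """Check a prospective deployment against the published conditions."""
    return (
        n_rows >= USAGE_CONDITIONS["min_training_rows"]
        and use_case in USAGE_CONDITIONS["supported_use_cases"]
        and USAGE_CONDITIONS["required_fields"] <= set(available_fields)
    )

print(qualifies(25_000, "churn_prediction",
                ["customer_id", "signup_date", "activity_log", "plan"]))  # True
print(qualifies(2_000, "fraud_detection", ["customer_id"]))               # False
```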

6. Biases with AI

AI algorithms are trained on datasets that may carry inherent biases. All such biases that the developer can think of should be listed with the solution. Much as in the legal and medical fields, these should also be added to a universal list of known biases, built iteratively, so that future developers can refer to it and test their own solutions for biases seen historically.
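
A minimal sketch of how such an iteratively built registry might be represented follows; the entry fields and example biases are illustrative assumptions:

```python
# Hypothetical shared bias registry: developers append biases observed in
# their solutions so that future developers can test against the list.

import json

BIAS_REGISTRY = [
    {"name": "sampling_bias",
     "description": "Training data under-represents a population segment.",
     "first_reported_in": "credit-scoring models"},
    {"name": "label_bias",
     "description": "Historical labels encode past human prejudice.",
     "first_reported_in": "hiring-screening models"},
]

def register_bias(name, description, first_reported_in):
    """Append a newly observed bias for later developers to test for."""
    BIAS_REGISTRY.append({"name": name,
                          "description": description,
                          "first_reported_in": first_reported_in})

register_bias("measurement_bias",
              "A proxy variable systematically distorts the target.",
              "healthcare-cost models")
print(json.dumps(BIAS_REGISTRY, indent=2))
```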

7. Job-loss risk of AI

Since the biggest worry today is the potential of some AI solutions to replace jobs, I wonder if an estimate could be laid out (i.e. the number of employees who would lose their jobs without being reassigned, and the number of years over which that would happen), represented by a risk rating (say, in green, yellow and red colour codes) that loosely indicates the potential cost to current workers.
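
As a hedged sketch of how those two estimates could collapse into a colour code, the thresholds below are arbitrary assumptions chosen only to illustrate the idea:

```python
# Hypothetical green/yellow/red job-loss risk rating, built from the two
# estimates proposed above: jobs displaced without reassignment, and the
# number of years over which the displacement happens.

def job_loss_risk(jobs_displaced, years):
    """Rate displacement risk by jobs lost per year."""
    rate = jobs_displaced / max(years, 1)  # avoid division by zero
    if rate < 10:
        return "green"
    if rate < 100:
        return "yellow"
    return "red"

print(job_loss_risk(jobs_displaced=30, years=5))   # green: 6 jobs/year
print(job_loss_risk(jobs_displaced=500, years=2))  # red: 250 jobs/year
```

Normalizing by years, as done here, keeps a gradual transition from being rated as harshly as an abrupt one.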

8. Kindness of AI

Though mostly overlooked so far, how kind an AI solution is becomes important when we consider the tricky decisions autonomous cars must make with human lives on the road. But how do we define kindness? A great challenge was underway back in 2018, calling for ways to teach AI to be kind, just the way we teach children to be, which is not so far-fetched a comparison if you think about it!

With GDPR, we have shown our collective capability to put effective measures and policies in place to ensure an ethical and sustainable approach to modern progress. Let’s ensure the same with AI.