In 2017, Facebook Artificial Intelligence Research (FAIR) pulled the plug on an AI project when a pair of chatbots started communicating in an unknown language. Researchers were baffled by the machines’ ability to invent a language and immediately halted the project, fearing the uncertainty surrounding the outcome of their development.

Such incidents, though few in number, cannot be taken lightly as we move towards a more machine-dependent world. The question that authorities and even government institutions need to ask is how much trust they can place in the technology.

Since the role of AI and ML in our day-to-day lives is unavoidable, what we need is a mechanism to optimise both human and AI outcomes for stronger results. So, here are some of the best practices suggested by experts in the field to validate a machine’s actions.

Statistical Method: In a recent study published in Molecular Informatics, researchers used a statistical equation to validate an AI programme’s ability and even answer the question, “What is the probability of achieving accuracy greater than 90%?”, for an AI system.
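The study’s exact equation is not reproduced here, but the kind of question it answers can be illustrated with a simple Bayesian treatment of test accuracy. The sketch below is for illustration only, assuming a Beta-Binomial model and made-up test counts; it is not the method from the Molecular Informatics paper:

```python
from scipy.stats import beta

# Hypothetical test results, purely for illustration.
n_test = 200       # number of held-out test examples
n_correct = 186    # number the model classified correctly

# With a uniform Beta(1, 1) prior, the posterior over the model's true
# accuracy after observing the test results is Beta(1 + correct, 1 + incorrect).
posterior = beta(1 + n_correct, 1 + (n_test - n_correct))

# "What is the probability of achieving accuracy greater than 90%?"
p_above_90 = posterior.sf(0.90)  # survival function: P(accuracy > 0.90)
print(f"P(true accuracy > 90%) = {p_above_90:.3f}")
```

Reporting the full posterior in this way, rather than only the point estimate 186/200 = 93%, gives a more honest picture of how reliable the accuracy claim is.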

“AI can assist us in understanding many phenomena in the world, but for it to properly provide us direction, we must know how to ask the right questions. We must be careful not to overly focus on a single number as a measure of an AI’s reliability,” said one of the researchers, describing the conclusion of the study.

Holdout Method: This is considered the simplest model evaluation technique. A given labelled dataset is divided into a training set and a test set. “Then, we fit a model to the training data and predict the labels of the test set. And the fraction of correct predictions constitutes our estimate of the prediction accuracy — we withhold the known test labels during prediction, of course. We really don’t want to train and evaluate our model on the same training dataset (this is called resubstitution evaluation), since it would introduce a very optimistic bias due to overfitting,” write the researchers in their paper titled Model Evaluation, Model Selection, And Algorithm Selection In Machine Learning.[…]
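To make the procedure concrete, here is a minimal holdout-evaluation sketch in Python using scikit-learn; the dataset, model, and 70/30 split are illustrative choices and not taken from the cited paper:

```python
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

# Take a labelled dataset and divide it into training and test sets.
X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=42, stratify=y
)

# Fit the model on the training data only.
model = LogisticRegression(max_iter=5000)
model.fit(X_train, y_train)

# Predict the withheld test labels; the fraction of correct predictions
# is the holdout estimate of prediction accuracy.
test_accuracy = accuracy_score(y_test, model.predict(X_test))
print(f"Holdout test accuracy: {test_accuracy:.3f}")
```

Evaluating on data the model never saw during training avoids the optimistic bias of resubstitution evaluation that the authors warn about.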


read more – copyright by www.analyticsindiamag.com