FAGMA GovTech Research

Top 6 Ways Developers Can Validate Artificial Intelligence Systems

In 2017, Facebook Artificial Intelligence Research (FAIR) pulled the plug on a project when a pair of chatbots started communicating in an unknown language. Researchers were baffled by the machines' ability to invent a language and immediately halted the project, fearing the uncertainties surrounding the outcome of their development.

Such incidents, though few in number, cannot be taken lightly as we move closer to a more machine-dependent world. The question that authorities and even government institutions need to ask is how much trust they can place in the technology.

Since the role of AI in our day-to-day life is completely unavoidable, what we need is a mechanism to optimise both human and AI outcomes for stronger results. So, here are some of the best practices, as suggested by experts in the field, to validate the machine's actions.

Statistical Method: In a recent study published in Molecular Informatics, the researchers used a statistical equation to validate an AI programme's ability and even answer the question "What is the probability of achieving accuracy greater than 90%?" for an AI system.
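The study itself does not publish its equation here, but one common way to frame "What is the probability of achieving accuracy greater than 90%?" is to place a Beta posterior over the model's true accuracy given its test-set results. The sketch below (an illustrative assumption, not the paper's actual method, with made-up test counts) estimates that probability using only the Python standard library:

```python
import random

def prob_accuracy_above(correct, total, threshold=0.90, draws=100_000, seed=0):
    """Estimate P(true accuracy > threshold) from held-out test results.

    With a uniform prior, the posterior over the model's true accuracy
    after observing `correct` successes in `total` trials is
    Beta(correct + 1, total - correct + 1). We sample from it with the
    stdlib's random.betavariate and count how often it clears the bar.
    """
    rng = random.Random(seed)
    a = correct + 1                      # successes + 1
    b = (total - correct) + 1            # failures + 1
    hits = sum(rng.betavariate(a, b) > threshold for _ in range(draws))
    return hits / draws

# Hypothetical example: 460 correct predictions out of 500 test cases
# (92% observed accuracy) -- how sure are we the true accuracy beats 90%?
p = prob_accuracy_above(460, 500)
```

Note that the answer is a full probability statement rather than the single point estimate the researchers caution against: an observed 92% on a small test set can still leave meaningful doubt about clearing 90%.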

"AI can assist us in understanding many phenomena in the world, but for it to properly provide us direction, we must know how to ask the right questions. We must be careful not to overly focus on a single number as a measure of an AI's reliability," he said, describing the conclusion of his study.

Holdout Method: It is considered the simplest model-evaluation technique. A given labelled data set is divided into test and training sets. "Then, we fit a model to the training data and predict the labels of the test set. And the fraction of correct predictions constitutes our estimate of the prediction accuracy — we withhold the known test labels during prediction, of course. We really don't want to train and evaluate our model on the same training dataset (this is called resubstitution evaluation), since it would introduce a very optimistic bias due to overfitting," write the researchers in their paper titled Model Evaluation, Model Selection, and Algorithm Selection in Machine Learning. […]
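The holdout procedure quoted above can be sketched with nothing but the standard library. The toy dataset and the deliberately trivial "model" (a majority-class predictor) are illustrative assumptions; any real classifier would slot into the same split-fit-score loop:

```python
import random

def holdout_split(data, test_frac=0.3, seed=42):
    """Shuffle labelled examples and split into train and test sets."""
    rng = random.Random(seed)
    shuffled = data[:]          # copy so the caller's list is untouched
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * (1 - test_frac))
    return shuffled[:cut], shuffled[cut:]

def fit_majority_class(train):
    """Toy 'model': always predict the most common training label."""
    labels = [label for _, label in train]
    return max(set(labels), key=labels.count)

def holdout_accuracy(data):
    """Fit on the training split only, score on the withheld test split.

    Scoring on the training data instead would be resubstitution
    evaluation -- the optimistically biased practice the quote warns
    against.
    """
    train, test = holdout_split(data)
    prediction = fit_majority_class(train)
    correct = sum(prediction == label for _, label in test)
    return correct / len(test)

# Hypothetical labelled data: (feature, label) pairs where roughly a
# third of the examples carry the minority label.
data = [(i, i % 3 == 0) for i in range(100)]
accuracy = holdout_accuracy(data)
```

Because the test labels are withheld until scoring, the resulting accuracy estimates how the model generalises, not how well it memorised the training set.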

read more – copyright by www.analyticsindiamag.com
