copyright by www.datanami.com 

A military drone misidentifies enemy tanks as friendlies. A self-driving car swerves into oncoming traffic. An NLP bot gives an erroneous summary of an intercepted wire. These are examples of how AI systems can be hacked, which is an area of increased focus for government and industrial leaders alike.

As AI technology matures, it’s being adopted widely, which is great. That is what is supposed to happen, after all. However, greater reliance on automated decision-making in the real world brings a greater threat that bad actors will employ techniques like adversarial machine learning and data poisoning to hack our AI systems.

What’s concerning is how easy it can be to hack AI. According to Arash Rahnama, Ph.D., the head of applied AI research at Modzy and a senior lead data scientist at Booz Allen Hamilton, AI models can be hacked by inserting a few tactically placed pixels (for a computer vision algorithm) or some innocuous-looking typos (for a natural language processing model) into the training set. Any algorithm, including neural networks and more traditional approaches like regression algorithms, is susceptible, he says.

“Let’s say you have a model you’ve trained on data sets. It’s classifying pictures of cats and dogs,” Rahnama says. “People have figured out ways of changing a couple of pixels in the input image, so now the network is misled into classifying an image of a cat into the dog category.”
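
The pixel-level trick Rahnama describes is often demonstrated with the fast gradient sign method (FGSM). The PyTorch sketch below is purely illustrative and not from the article: the classifier, the cat/dog labels, and the epsilon value are all assumptions, but it shows how an imperceptibly small perturbation can be computed from a model’s own gradients.

    # Minimal FGSM sketch in PyTorch (illustrative; the classifier, labels,
    # and epsilon below are assumptions, not details from the article).
    import torch
    import torch.nn.functional as F

    def fgsm_perturb(model, images, labels, epsilon=0.01):
        """Return copies of `images` nudged to increase the model's loss."""
        images = images.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(images), labels)
        loss.backward()
        # Shift every pixel a tiny step in the direction of the loss gradient.
        # The change is invisible to a person but can flip the predicted class.
        adversarial = images + epsilon * images.grad.sign()
        return adversarial.clamp(0.0, 1.0).detach()

    # Hypothetical usage: `classifier` is a trained cat/dog network and
    # `cat_batch` holds cat images labeled 0 ("cat").
    # adv = fgsm_perturb(classifier, cat_batch,
    #                    torch.zeros(len(cat_batch), dtype=torch.long))
    # classifier(adv).argmax(dim=1)  # may now return 1 ("dog")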

Unfortunately, these attacks are not detectable through traditional methods, he says. “The image still looks the same to our eyes,” Rahnama tells Datanami. “But somehow it looks vastly different to the AI model itself.”

(Image caption: A Tesla Model S thought this was an 85 mile-an-hour speed limit sign, according to researchers at McAfee.)

Real-World Impact

The ramifications for mistaking a dog for a cat are small. But the same technique has been shown to work in other areas, such as using surreptitiously placed stickers to trick the Autopilot feature of a Tesla Model S into driving into oncoming traffic, or tricking a self-driving car into mistaking a stop sign for a 45 mile-per-hour speed limit sign.

“It’s a big problem,” UC Berkeley professor Dawn Song, an expert on adversarial AI who has worked with Google to bolster its Auto-Complete function, said last year at an MIT Technology Review event. “We need to come together to fix it.”

That is starting to happen. In 2019, DARPA launched its Guaranteeing AI Robustness against Deception (GARD) program, which seeks to build the technological underpinnings to identify vulnerabilities, bolster AI robustness, and develop defense mechanisms that are resilient to AI hacks.

There is a critical need for ML defense, says Hava Siegelmann, the program manager in DARPA’s Information Innovation Office (I2O).

“The GARD program seeks to prevent the chaos that could ensue in the near future when attack methodologies, now in their infancy, have matured to a more destructive level,” she stated in 2019. “We must ensure ML is safe and incapable of being deceived.”

Resilient AI

There are various open-source approaches to making AI models more resilient to attacks. One method is to create your own adversarial data sets and train your model on them, which helps the model correctly classify adversarial inputs it encounters in the real world. A rough sketch of this recipe, often called adversarial training, follows below.
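
As an illustration only, the sketch below folds perturbed copies of each training batch back into the training loop. It reuses the hypothetical `fgsm_perturb` helper from the earlier sketch; the model, data loader, optimizer, and hyperparameters are all assumed, not taken from Modzy’s or anyone else’s actual implementation.

    # Adversarial training sketch (illustrative; reuses the hypothetical
    # `fgsm_perturb` helper above, with assumed model/loader/optimizer names).
    import torch.nn.functional as F

    def adversarial_training_epoch(model, loader, optimizer, epsilon=0.01):
        """Run one epoch over clean batches plus perturbed copies of them."""
        model.train()
        for images, labels in loader:
            # Build adversarial versions of the current batch on the fly.
            adv_images = fgsm_perturb(model, images, labels, epsilon)

            optimizer.zero_grad()
            # Penalize mistakes on both the clean and the perturbed images,
            # so the model learns to resist small, targeted perturbations.
            loss = (F.cross_entropy(model(images), labels)
                    + F.cross_entropy(model(adv_images), labels))
            loss.backward()
            optimizer.step()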

Rahnama is spearheading Modzy’s offering in adversarial AI and explainable AI, which are two sides of the same coin. His efforts so far have yielded two proprietary offerings. […]

Read more at www.datanami.com