
Hacking AI: Exposing Vulnerabilities in Machine Learning


copyright by www.datanami.com 

A military drone misidentifies enemy tanks as friendlies. A self-driving car swerves into oncoming traffic. An NLP bot gives an erroneous summary of an intercepted wire. These are examples of how AI systems can be hacked, which is an area of increased focus for government and industrial leaders alike.

As AI technology matures, it’s being adopted widely, which is great. That is what is supposed to happen, after all. However, greater reliance on automated decision-making in the real world brings a greater threat that bad actors will employ techniques like adversarial AI and data poisoning to hack our AI systems.

What’s concerning is how easy it can be to hack AI. According to Arash Rahnama, Ph.D., the head of applied AI research at Modzy and a senior lead data scientist at Booz Allen Hamilton, AI models can be hacked by inserting a few tactically placed pixels (for a computer vision algorithm) or some innocuous-looking typos (for a natural language processing model) into the training set. Any algorithm, including neural networks and more traditional approaches like regression algorithms, is susceptible, he says.

“Let’s say you have a model you’ve trained on data sets. It’s classifying pictures of cats and dogs,” Rahnama says. “People have figured out ways of changing a couple of pixels in the input image, so now the network is misled into classifying an image of a cat into the dog category.”
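To make the pixel-level attack Rahnama describes concrete, here is a minimal sketch using the Fast Gradient Sign Method (FGSM), one common open-source way to craft such perturbations. The article does not say which attack or framework Rahnama has in mind, so the PyTorch model, the random stand-in image, and the epsilon value below are illustrative assumptions, not his method.

```python
# Minimal FGSM sketch (illustrative; not the specific attack from the article).
import torch
import torch.nn.functional as F
from torchvision import models

def fgsm_attack(model, image, label, epsilon=0.01):
    """Nudge each pixel by at most `epsilon` in the direction that increases the loss."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    perturbed = image + epsilon * image.grad.sign()  # change is imperceptible to a human
    return perturbed.clamp(0, 1).detach()

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()
image = torch.rand(1, 3, 224, 224)   # stand-in for a photo of a cat
label = torch.tensor([281])          # ImageNet class 281: "tabby cat"

adversarial = fgsm_attack(model, image, label)
print("clean prediction:      ", model(image).argmax(dim=1).item())
print("adversarial prediction:", model(adversarial).argmax(dim=1).item())
```

With a large enough epsilon the perturbed image, which still looks unchanged to a person, can be pushed into a different class.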

Unfortunately, these attacks are not detectable through traditional methods, he says. “The image still looks the same to our eyes,” Rahnama tells Datanami. “But somehow it looks vastly different to the model itself.”

[Image caption: A Tesla Model S thought this was an 85 mile-an-hour speed limit sign, according to researchers at McAfee.]

Real-World Impact

The ramifications of mistaking a dog for a cat are small. But the same technique has been shown to work in other areas, such as using surreptitiously placed stickers to trick the Autopilot feature of a Tesla Model S into driving into oncoming traffic, or tricking a self-driving car into mistaking a stop sign for a 45 mile-per-hour speed limit sign.

“It’s a big problem,” UC Berkeley professor Dawn Song, an expert on adversarial AI who has worked with Google to bolster its Auto-Complete function, said last year at an MIT Technology Review event. “We need to come together to fix it.”

That is starting to happen. In 2019, DARPA launched its Guaranteeing AI Robustness against Deception (GARD) program, which seeks to build the technological underpinnings to identify vulnerabilities, bolster robustness, and build defense mechanisms that are resilient to hacks.

There is a critical need for ML defense, says Hava Siegelmann, the program manager in DARPA’s Information Innovation Office (I2O).

“The GARD program seeks to prevent the chaos that could ensue in the near future when attack methodologies, now in their infancy, have matured to a more destructive level,” she stated in 2019. “We must ensure ML is safe and incapable of being deceived.”

Resilient AI

There are various open source approaches to making AI models more resilient to attacks. One method is to create your own adversarial data sets and train your model on them, which enables the model to correctly classify adversarial data in the real world.
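As a rough illustration of that defense, the sketch below builds an FGSM-perturbed copy of each training batch and trains on the clean and perturbed examples together, so the model learns to classify adversarial inputs correctly. The tiny PyTorch model, dummy data, and epsilon are stand-in assumptions; this is not Modzy’s implementation or any particular open source library’s.

```python
# Minimal adversarial-training sketch (illustrative stand-in model and data).
import torch
import torch.nn as nn
import torch.nn.functional as F

model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 128), nn.ReLU(), nn.Linear(128, 10))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

def perturb(model, x, y, epsilon=0.1):
    """FGSM-style perturbation used to build the adversarial half of each batch."""
    x = x.clone().detach().requires_grad_(True)
    F.cross_entropy(model(x), y).backward()
    return (x + epsilon * x.grad.sign()).clamp(0, 1).detach()

def train_step(x, y):
    x_adv = perturb(model, x, y)          # craft adversarial copies of this batch
    optimizer.zero_grad()
    # Train on clean and adversarial examples together.
    loss = F.cross_entropy(model(x), y) + F.cross_entropy(model(x_adv), y)
    loss.backward()
    optimizer.step()
    return loss.item()

# Dummy batch standing in for real image data (e.g., 28x28 grayscale digits).
x = torch.rand(32, 1, 28, 28)
y = torch.randint(0, 10, (32,))
print("combined loss:", train_step(x, y))
```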

Rahnama is spearheading Modzy’s offerings in adversarial AI and explainable AI, which are two sides of the same coin. His efforts so far have yielded two proprietary offerings. […]

read more www.datanami.com 
