Deep neural networks are one of the most fundamental components of artificial intelligence (AI), used to process images and other data through layered mathematical models. They are responsible for some of the greatest advancements in the field, but they also malfunction in various ways.
These malfunctions can range from a small or non-existent impact, such as a simple misidentification, to a dramatic and potentially deadly one, such as a self-driving car failure. New research coming out of the University of Houston suggests that our common assumptions about these malfunctions may be wrong, which could help in evaluating the reliability of the networks in the future.
The paper was published in Nature Machine Intelligence in November.
“Adversarial Examples”
Machine learning and other types of AI are crucial in many sectors and tasks, such as banking and cybersecurity systems. According to Cameron Buckner, an associate professor of philosophy at UH, an understanding of the failures brought on by “adversarial examples” is essential. These adversarial examples occur when a deep neural network system misjudges images and other data when it encounters information outside the training inputs used to develop the network.
Adversarial examples are rare in practice, and they are often created or discovered only with the help of another machine learning network.
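To make this concrete, below is a minimal sketch of one common gradient-based way such examples are generated, the Fast Gradient Sign Method. This is an illustration only, not the method from the Nature Machine Intelligence paper; the tiny untrained classifier, the random input, and the epsilon value are all assumptions made for the example.

```python
# Illustrative sketch of the Fast Gradient Sign Method (FGSM) in PyTorch.
# Not the paper's method; the model, data, and epsilon are placeholder assumptions.
import torch
import torch.nn as nn

torch.manual_seed(0)

# A tiny stand-in classifier; in practice this would be a trained deep network.
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
model.eval()

loss_fn = nn.CrossEntropyLoss()

# One fake 28x28 "image" and a label, purely for illustration.
x = torch.rand(1, 1, 28, 28, requires_grad=True)
y = torch.tensor([3])

# Compute the gradient of the loss with respect to the input pixels.
loss = loss_fn(model(x), y)
loss.backward()

# FGSM: nudge every pixel a small step (epsilon) in the direction that
# increases the loss, then clamp back to the valid pixel range.
epsilon = 0.1
x_adv = (x + epsilon * x.grad.sign()).clamp(0.0, 1.0).detach()

# With a trained network and a suitable epsilon, the perturbed input typically
# changes the prediction while looking essentially unchanged to a human.
print("original prediction:   ", model(x).argmax(dim=1).item())
print("adversarial prediction:", model(x_adv).argmax(dim=1).item())
```

This adversary-crafted case is distinct from the naturally occurring adversarial examples discussed later in the article, which arise without any deliberate attack.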
“Some of these adversarial events could instead be artifacts, and we need to better know what they are in order to know how reliable these networks are,” Buckner wrote.
In other words, Buckner suggests the malfunction could be caused by an interaction between genuine patterns in the data and what the network sets out to process, meaning it is not purely a mistake.
Patterns as Artifacts
“Understanding the implications of adversarial examples requires exploring a third possibility: that at least some of these patterns are artifacts,” Buckner said. “Thus, there are presently both costs in simply discarding these patterns and dangers in using them naively.”
Although it is not the cause every time, intentional malfeasance poses the highest risk when these adversarial events lead to machine learning malfunctions.
“It means malicious actors could fool systems that rely on an otherwise reliable network,” Buckner said. “That has security applications.”
This could mean hackers breaching a security system based on facial recognition technology, or traffic signs altered to confuse autonomous vehicles.
Previous research has demonstrated that some adversarial examples occur naturally, arising when a machine learning system misinterprets data through an unanticipated interaction rather than through errors in the data. These naturally occurring examples are rare, and the only current way to discover them is through AI.
However, Buckner says that researchers need to rethink the ways in which they address anomalies.
[…]
Read more: www.unite.ai