Attractive to both white-hats and cybercriminals, AI’s role in security has yet to find an equilibrium between the two sides.
copyright by threatpost.com
Artificial intelligence is the new golden ring for cybersecurity developers, thanks to its potential to not just automate functions at scale but also to make contextual decisions based on what it learns over time. This can have big implications for security personnel—all too often, companies simply don’t have the resources to search through the haystack of anomalies for the proverbial malicious needle.
For instance, if a worker normally based in New York suddenly one morning logs in from Pittsburgh, that’s an anomaly — and the AI can tell it’s an anomaly because it has learned to expect that user to log in from New York. Similarly, if a log-in in Pittsburgh is followed within a few minutes by another log-in from the same user in, say, California, that’s likely a malicious red flag.
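That second scenario is often called an "impossible travel" check: if the distance between two log-in locations implies a travel speed no human could achieve, the pair is flagged. A minimal sketch of the idea is below; the 900 km/h speed ceiling and the coordinate-based log-in records are illustrative assumptions, not any particular vendor's implementation.

```python
import math
from datetime import datetime

# Assumption: anything faster than a commercial jet is implausible travel.
MAX_PLAUSIBLE_SPEED_KMH = 900

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometers between two (lat, lon) points."""
    r = 6371.0  # mean Earth radius, km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def impossible_travel(login_a, login_b):
    """Flag a pair of log-ins whose implied travel speed is implausible.

    Each log-in is a (timestamp, lat, lon) tuple for the same user.
    """
    (t1, lat1, lon1), (t2, lat2, lon2) = sorted([login_a, login_b])
    hours = (t2 - t1).total_seconds() / 3600
    if hours == 0:
        return True  # simultaneous log-ins from two places
    speed_kmh = haversine_km(lat1, lon1, lat2, lon2) / hours
    return speed_kmh > MAX_PLAUSIBLE_SPEED_KMH
```

A Pittsburgh log-in followed five minutes later by one from California implies a speed of tens of thousands of km/h and is flagged; the same user appearing in New York eight hours earlier is not.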
So, at its simplest level, AI and “machine learning” for security are oriented around understanding behavioral norms. The system takes some time to observe the environment and establish a baseline of normal behavior — so that it can pick up on deviations from the norm by applying learned models to the data it sees.
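The baseline idea can be sketched very simply: record where a user habitually logs in from, and treat a rarely-seen location as a deviation once enough history has accumulated. This is a minimal frequency-based illustration of the concept described above; the 5% rarity threshold and the 20-observation warm-up period are assumptions chosen for the example, and real systems model many more signals than location alone.

```python
from collections import Counter

class LoginBaseline:
    """Learn where a user normally logs in from, then flag deviations."""

    def __init__(self, min_observations=20):
        self.history = Counter()
        self.min_observations = min_observations  # warm-up before judging

    def observe(self, location):
        """Record one observed log-in location while building the baseline."""
        self.history[location] += 1

    def is_anomalous(self, location):
        """Return True if this location deviates from the learned norm."""
        total = sum(self.history.values())
        if total < self.min_observations:
            return False  # still learning; not enough data to judge
        # A location that accounts for under 5% of history is a deviation.
        return self.history[location] / total < 0.05
```

For the worker in the example: after a month of New York log-ins, a sudden appearance in Pittsburgh would score as anomalous, while another New York log-in would not.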
AI for security can help defenders in myriad ways. However, there are also downsides to the emergence of AI. For one, the technology has been leveraged by cybercriminals, and it’s clear that it can be co-opted for various nefarious tasks. These have included at-scale scanning for open, vulnerable ports — and automated composition of emails that mimic the exact tone and voice of a company’s CEO, learned over time through 24/7 eavesdropping.
And in the not-too-distant future, that automatic mimicking could even extend to voice. IBM scientists for instance have created a way for AI systems to analyze, interpret and mirror users’ unique speech and linguistic traits – in theory to make it easier for humans to talk to their technology. However, the potential for using this type of capability for malicious spoofing applications is obvious.
Meanwhile, the zeal for adopting AI across vertical markets — for cybersecurity and beyond — has opened up a rapidly proliferating new attack surface, one that doesn’t always feature built-in security-by-design. AI has the capacity to revolutionize any number of industries: offering smarter recommendations to online shoppers, speeding along manufacturing processes with automatic quality checks, or even tracking and monitoring wildfire risk, as researchers at the University of Alberta in Canada are doing [for more on this, please see the sidebar for this story]. […]