copyright by securityintelligence.com
Artificial intelligence (AI) is generating both interest and investment from companies hoping to leverage the power of autonomous, self-learning solutions. The Pentagon recently earmarked $2 billion in funding to help the Defense Advanced Research Projects Agency (DARPA) push AI forward, and artificially intelligent solutions are dominating industry subsets such as medical imaging, where companies raised a combined $130 million in investments from March 2017 through June 2018. Information security deployments are also on the rise as IT teams leverage AI to defeat evolving attack methods, and recent data suggests that AI implementation could both boost gross domestic product (GDP) and generate new jobs.
It’s easy to see AI as a quick fix for everything from stagnating revenues to medical advancement to network protection. According to a recent survey from ESET, however, increasing business expectations and misleading marketing terminology have generated significant hype around AI, to the point where 75 percent of IT decision-makers now see AI as the silver bullet for their security issues.
It’s time for an AI reality check. What’s the hype, where’s the hope and what does effective AI implementation really look like?
What Are the Current Limitations of Artificial Intelligence?
AI already has a home in IT security. As noted in a Computer Weekly article, AI tools are “invaluable” for malware analysis, since they’re able to quickly learn the difference between clean and malicious data when fed correctly labeled samples. What’s catching the attention of chief information security officers (CISOs) and chief information officers (CIOs) right now, however, is the prospect of AI tools that require minimal human interaction to improve network security.
This comes down to the difference between supervised and unsupervised AI: current tools and technologies empower the former, but the latter is still largely out of reach. Without humans to monitor the input and output of AI systems, it’s possible for AI tools to capture and report basic system data, but it’s beyond their scope to design intelligent threat response plans of the silver-bullet variety.
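To make the supervised side of that distinction concrete, here is a minimal sketch of what “learning the difference between clean and malicious data when fed correctly labeled samples” looks like in practice. It uses a simple nearest-centroid classifier written in plain Python; the feature vectors (entropy, suspicious API call count, packed-executable flag) and the training data are entirely hypothetical, chosen only to illustrate the idea that labels supplied by humans drive the learning.

```python
# Supervised classification sketch: learn per-label centroids from
# human-labeled samples, then assign new samples to the nearest centroid.
# Features and data below are hypothetical, for illustration only.

def centroid(rows):
    """Element-wise mean of a list of equal-length feature vectors."""
    n = len(rows)
    return [sum(col) / n for col in zip(*rows)]

def train(samples):
    """samples: list of (features, label) pairs -> {label: centroid}."""
    by_label = {}
    for features, label in samples:
        by_label.setdefault(label, []).append(features)
    return {label: centroid(rows) for label, rows in by_label.items()}

def classify(model, features):
    """Return the label whose centroid is nearest (squared Euclidean)."""
    def dist(c):
        return sum((a - b) ** 2 for a, b in zip(features, c))
    return min(model, key=lambda label: dist(model[label]))

# Hypothetical features: [entropy, suspicious API calls, packed flag]
labeled = [
    ([0.2, 1, 0], "clean"),
    ([0.3, 0, 0], "clean"),
    ([0.9, 7, 1], "malicious"),
    ([0.8, 9, 1], "malicious"),
]
model = train(labeled)
print(classify(model, [0.85, 8, 1]))  # resembles the malicious group
```

The key point of the sketch is that every training sample carries a human-supplied label; an unsupervised system, by contrast, would have to discover the clean/malicious structure on its own, which is exactly the capability the article notes is still largely out of reach.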
AI also has basic limitations that may be inviolate or may require a new research approach to solve. This is largely tied to experience: As noted by Pedro Domingos, professor of computer science at the University of Washington and author of “The Master Algorithm,” machines don’t learn from experience the same way humans do.
“A robot can learn to pick up a bottle, but if it has to pick up a cup, it starts from scratch,” said Domingos, as reported by Wired. […]