copyright by securityintelligence.com

Artificial intelligence (AI) is generating both interest and investment from companies hoping to leverage the power of autonomous, self-learning solutions. The Pentagon recently earmarked $2 billion in funding to help the Defense Advanced Research Projects Agency (DARPA) push AI forward, and artificially intelligent solutions are dominating industry subsets such as medical imaging, where AI companies raised a combined $130 million in investments from March 2017 through June 2018. Information security deployments are also on the rise as IT teams leverage AI to defeat evolving attack methods, and recent data suggests that AI implementation could both boost gross domestic product (GDP) and generate new jobs.

It’s easy to see AI as a quick fix for everything from stagnating revenues to medical advancement to network protection. According to a recent survey from ESET, however, rising business expectations and misleading marketing terminology have generated significant hype around AI, to the point where 75 percent of IT decision-makers now see AI as the silver bullet for their security issues.

It’s time for an artificial intelligence reality check. What’s the hype, where’s the hope and what does effective implementation really look like?

What Are the Current Limitations of Artificial Intelligence?

AI already has a home in IT security. As noted in a Computer Weekly article, machine learning tools are “invaluable” for malware analysis since they’re able to quickly learn the difference between clean and malicious data when fed correctly labeled samples. What’s catching the attention of chief information security officers (CISOs) and chief information officers (CIOs) right now, however, is the prospect of AI tools that require minimal human interaction to improve network security.

This comes down to the difference between supervised and unsupervised machine learning — current tools and technologies empower the former, but the latter is still largely out of reach. Without humans to monitor the input and output of systems, AI tools can capture and report basic system data, but designing intelligent threat response plans of the silver-bullet variety remains beyond their scope.
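To make the distinction concrete, here is a minimal, self-contained sketch of the supervised approach the article describes: a classifier that learns to separate clean from malicious files only because a human has supplied correctly labeled samples. The feature vectors, labels and nearest-centroid method are all illustrative assumptions, not any vendor's actual implementation.

```python
import math

# Toy feature vectors per file: (byte entropy, fraction of suspicious API calls).
# Every value here is made up for illustration -- the point is that each
# training sample arrives with a human-assigned label.
labeled_samples = [
    ((0.2, 0.1), "clean"),
    ((0.3, 0.2), "clean"),
    ((0.9, 0.8), "malicious"),
    ((0.8, 0.9), "malicious"),
]

def centroid(points):
    """Average the feature vectors of one class."""
    n = len(points)
    return tuple(sum(p[i] for p in points) / n for i in range(len(points[0])))

# Supervised step: build one centroid per label from the labeled data.
centroids = {
    label: centroid([p for p, l in labeled_samples if l == label])
    for label in {"clean", "malicious"}
}

def classify(sample):
    """Assign a new file's features to the nearest class centroid."""
    return min(centroids, key=lambda label: math.dist(sample, centroids[label]))

print(classify((0.85, 0.70)))  # -> malicious
print(classify((0.25, 0.15)))  # -> clean
```

An unsupervised variant could group the same vectors into clusters without any labels, but a human would still have to decide which cluster represents a threat and what response is appropriate — which is the gap between today's tooling and the hands-off, silver-bullet systems the survey respondents imagine.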

AI also has basic limitations that may be inviolate or may require a new research approach to solve. This is largely tied to experience: As noted by Pedro Domingos, professor of computer science at the University of Washington and author of “The Master Algorithm,” machines don’t learn from experience the same way humans do.


“A robot can learn to pick up a bottle, but if it has to pick up a cup, it starts from scratch,” said Domingos, as reported by Wired.[…]
