Hollywood images of artificial intelligence – and the iconic familiars of HAL 9000, the Terminator and Her’s Samantha – have shaped the public perception of artificial intelligence (AI) as a vessel for human-like interaction. Yet as AI’s resurgence sees the technology applied to all manner of business problems, security specialists are rapidly warming to its potential as a fastidious assistant that works tirelessly to extract droplets of insight from raging rivers of information.
read more – copyright by www.cso.com.au
This learning process has evolved from the refinement of big-data techniques feeding a surfeit of rich data sets to ever more sophisticated machine-learning solutions. Automated security systems now apply AI techniques to massive databases of security logs, building baseline behavioural models for different days and times of the week; if particular activity strays too far from this norm, it can be instantly flagged, investigated, and actioned in real time.
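The baseline-modelling approach described above can be sketched in a few lines. This is a minimal illustration, not any vendor's actual implementation: it assumes historical log volumes have been aggregated into per-day, per-hour event counts (the data here is invented), learns a mean and standard deviation for each time bucket, and flags activity that strays too far from that norm.

```python
from collections import defaultdict
from statistics import mean, stdev

# Hypothetical aggregated security-log volumes:
# (day_of_week, hour, event_count) per observation window.
history = [
    ("Mon", 9, 120), ("Mon", 9, 110), ("Mon", 9, 130),
    ("Mon", 9, 125), ("Mon", 9, 115),
]

# Group historical counts into (day, hour) buckets.
buckets = defaultdict(list)
for day, hour, count in history:
    buckets[(day, hour)].append(count)

# Baseline behavioural model: mean and spread per time bucket.
baseline = {
    key: (mean(counts), stdev(counts))
    for key, counts in buckets.items()
    if len(counts) >= 2
}

def is_anomalous(day, hour, count, threshold=3.0):
    """Flag activity more than `threshold` standard deviations
    from the learned norm for that day and hour."""
    if (day, hour) not in baseline:
        return False  # no baseline yet; defer judgement
    mu, sigma = baseline[(day, hour)]
    if sigma == 0:
        return count != mu
    return abs(count - mu) / sigma > threshold

print(is_anomalous("Mon", 9, 118))  # typical Monday-morning volume -> False
print(is_anomalous("Mon", 9, 400))  # sudden spike -> True
```

Production systems learn far richer features than raw counts, but the principle is the same: model "normal" per time window, then surface only the deviations.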
Fighting off the alerts deluge
As security practitioners are well aware, the flood of security alerts has become a logistical nightmare. Figures in Cisco’s recent 2017 Annual Cybersecurity Report (ACR) suggest that 44 percent of security operations managers see more than 5000 security alerts per day. The average organisation can only investigate 56 percent of daily security alerts – 28 percent of which are ultimately held to be legitimate. Little wonder that AI and machine-learning systems are becoming beacons of hope for CSOs drowning in security alerts. And the shift towards cloud computing – which substantially increases the number of logged Internet requests – has exacerbated the need. Just 1 in 5000 user activities associated with connected third-party cloud applications, the Cisco analysis found, is suspicious.
Finding the needle in a haystack
“The challenge for security teams,” the report’s authors note, “is pinpointing that one instance… Only with automation can security teams cut through the ‘noise’ of security alerts and focus their resources on investigating true threats. The multistage process of identifying normal and potentially suspicious user activities… hinges on the use of automation, with algorithms applied at every stage.” The scope of the problem becomes clear when considering the volumes of attacks currently traversing the Internet. Security vendor Trend Micro, for one, reports blocking 81.9 billion threats through its Smart Protection Network in 2016 alone – a 56 percent increase compared with the previous year – and that’s just from one of dozens of vendors that are actively dealing with customers’ security risks using their cloud-based detection services. […]
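The multistage automation the report describes can be pictured as a pipeline of successive filters, each discarding obviously benign activity so only a small residue reaches human analysts. The following sketch is purely illustrative – the stage names, fields, and thresholds are assumptions, not Cisco's algorithms:

```python
from dataclasses import dataclass

@dataclass
class Alert:
    source_ip: str
    request_rate: float   # requests per minute
    known_good: bool      # matches an allow-listed pattern

def allow_list_stage(alerts):
    # Stage 1: drop activity matching known-good patterns.
    return [a for a in alerts if not a.known_good]

def rate_stage(alerts, limit=100.0):
    # Stage 2: keep only unusually high-volume sources.
    return [a for a in alerts if a.request_rate > limit]

def triage(alerts):
    # Stages run in order; each sees only the previous stage's survivors,
    # so the expensive human review happens last and rarely.
    for stage in (allow_list_stage, rate_stage):
        alerts = stage(alerts)
    return alerts

raw = [
    Alert("10.0.0.1", 5.0, True),     # allow-listed, filtered at stage 1
    Alert("10.0.0.2", 250.0, False),  # noisy and unexplained: survives
    Alert("10.0.0.3", 20.0, False),   # low volume, filtered at stage 2
]
print([a.source_ip for a in triage(raw)])  # -> ['10.0.0.2']
```

Real deployments chain many more stages – statistical models, reputation lookups, peer-group comparisons – but each stage serves the same purpose: shrinking the haystack before anyone searches for the needle.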