AI is making a dent in crime fighting

Last November, detectives investigating a murder case in Bentonville, Arkansas, accessed utility data from a smart meter to determine that 140 gallons of water had been used at the victim’s home between 1 a.m. and 3 a.m. It was more water than had been used at the home before, and it was used at a suspicious time—evidence that the patio area had been sprayed down to conceal the murder scene.
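For illustration only, here is a minimal sketch of the kind of check that surfaces such an anomaly: compare overnight consumption against a home's own historical baseline and flag readings that sit far outside it. The hourly figures, the 1 a.m. to 3 a.m. window and the z-score threshold below are hypothetical, not details from the Bentonville investigation.

# Toy illustration with hypothetical smart-meter data: flag overnight water
# usage that far exceeds a home's historical baseline, the pattern the
# investigators noticed.
import numpy as np

rng = np.random.default_rng(1)
# 30 days of hourly usage in gallons; overnight hours are normally near zero.
usage = rng.gamma(shape=1.5, scale=2.0, size=(30, 24))
usage[:, 0:5] *= 0.05                    # midnight-5 a.m. baseline is tiny
usage[29, 1:3] = [80, 60]                # final night: 140 gallons, 1-3 a.m.

night = usage[:, 1:3].sum(axis=1)        # total usage from 1 a.m. to 3 a.m.
baseline_mean = night[:-1].mean()
baseline_std = night[:-1].std()

# Simple z-score test: how many standard deviations above the usual overnight
# pattern was the final night's consumption?
z = (night[-1] - baseline_mean) / baseline_std
if z > 3:
    print(f"suspicious overnight usage: {night[-1]:.0f} gallons (z-score {z:.1f})")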

Cyber security

As technology advances, we have more detailed data and analytics at our fingertips than ever before, with the potential to offer new insights for crime investigators. One area crying out for more insight is cyber security. By 2020, 60 percent of digital businesses will suffer a major service failure due to the inability of IT security teams to manage digital risk, according to Gartner. If we pair all this new Internet of Things (IoT) data with artificial intelligence (AI) and machine learning, there's scope to turn the tide in the fight against cyber criminals. We're not just talking about identifying vulnerabilities, risks and cyber crimes, but also automatically combating them.

Automated threat detection and mitigation

Security professionals face a difficult task in keeping enterprise networks safe. They must uncover vulnerabilities in a continuously growing and increasingly complex landscape of devices and software. When data breaches do occur, they must identify them, limit the damage and track those responsible. Investigations take time, and false positives are all too common. What if AI platforms or cognitive security solutions could be employed to cut through the noise?

Researchers from MIT were able to create a virtual AI analyst that successfully predicted 85 percent of cyber attacks by incorporating input from human experts. Not only is that three times better than most current rules-based systems, but it also reduced the number of false positives by a factor of five. The secret sauce here is that the system is constantly learning. Every time a human analyst identifies a false positive or a genuine threat, the system adjusts to accommodate that feedback and creates new models to detect threats. The more feedback it gets, the more accurate it becomes. Not only does this improve threat detection, but it also frees up human analysts to investigate the complex cases that really require their attention. If they're not bogged down in false positives, it's possible to make better use of their expertise […]
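As a rough sketch of that feedback loop (not MIT's actual implementation; the features, models and thresholds below are assumptions for illustration), an unsupervised detector can rank events for analyst review, and the resulting labels then train a supervised model that scores future traffic:

# Minimal human-in-the-loop threat detection sketch. Unsupervised scoring
# surfaces suspicious events; analyst labels on those events train a
# supervised model, so each round of feedback sharpens detection.
import numpy as np
from sklearn.ensemble import IsolationForest, RandomForestClassifier

rng = np.random.default_rng(0)

# Synthetic "network event" features (e.g. bytes transferred, failed logins,
# distinct ports touched). Real deployments would use engineered log features.
normal = rng.normal(loc=[500, 1, 5], scale=[100, 1, 2], size=(2000, 3))
attacks = rng.normal(loc=[5000, 20, 60], scale=[800, 5, 10], size=(40, 3))
events = np.vstack([normal, attacks])
truth = np.array([0] * len(normal) + [1] * len(attacks))  # hidden ground truth

# Round 1: unsupervised outlier scores rank events for analyst review.
iso = IsolationForest(contamination=0.05, random_state=0).fit(events)
scores = -iso.score_samples(events)            # higher = more anomalous
review_queue = np.argsort(scores)[::-1][:100]  # top 100 go to the analyst

# The "analyst" labels the reviewed events (simulated here with ground truth).
labels = truth[review_queue]

# Round 2: that feedback trains a supervised model; future events are scored
# by it, and the loop repeats as new labels arrive.
clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(events[review_queue], labels)
threat_prob = clf.predict_proba(events)[:, 1]

flagged = threat_prob > 0.5
print(f"flagged {flagged.sum()} events, "
      f"caught {(flagged & (truth == 1)).sum()} of {truth.sum()} true attacks")

Each pass through the loop adds more labelled examples, so the supervised model, and with it the false-positive rate, keeps improving, which is the behaviour the MIT researchers reported.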