October 13, 2021

ML for Security Is Dead. Long Live ML for Security

(Ryzhi/Shutterstock)

When it comes to staying on top of security threats, machine learning, unquestionably, must be part of the equation. The volume of data is simply too great to cope without it. But as it’s currently being used, ML may be doing more harm than good, particularly when it comes to alarm fatigue.

Alarm fatigue is a condition that occurs when an operator is overloaded with alarms, the majority of which typically turn out to be false positives. With too many alarms to investigate in a limited amount of time–and the knowledge that most of them are false positives–the operator begins to ignore some alarms, which invariably leads to bad outcomes.

What sort of bad outcomes, you ask? Well, the operator of the underwater oil pipeline that ruptured last week in Orange County reportedly was notified of a problem three hours before it finally shut off the pipeline. Did alarm fatigue play a role in the operator’s control room? An investigation is underway, and we will eventually find out.

Alarm fatigue is a well-documented phenomenon in healthcare, which has embraced big data and remote sensing products in recent years. Research shows that 72% to 99% of critical alarms turn out to be false positives, which leads clinicians to ignore them. However, sometimes the alarms are real, and complications from alarm fatigue lead to hundreds of deaths in the U.S. every year, according to one examination of the data.
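
To see why the false-positive share runs so high, it helps to work through the base-rate arithmetic. The numbers below are purely hypothetical, not taken from the studies above, but they illustrate how even a seemingly accurate detector buries operators in false alarms when genuinely critical events are rare:

```python
# Hypothetical base-rate illustration: even an accurate detector produces
# mostly false alarms when true events are rare. All numbers are assumptions.
true_positive_rate = 0.99   # detector catches 99% of real events
false_positive_rate = 0.01  # detector misfires on 1% of normal cases
prevalence = 0.001          # 1 in 1,000 monitored events is truly critical

p_alarm = (true_positive_rate * prevalence
           + false_positive_rate * (1 - prevalence))
p_real_given_alarm = (true_positive_rate * prevalence) / p_alarm

print(f"Share of alarms that are real: {p_real_given_alarm:.1%}")  # ~9.0%
```

In this toy scenario, roughly nine alarms in ten are false, even though the detector is right 99% of the time on both real and normal events.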

The same situation exists in commercial aviation, where modern fly-by-wire jets are outfitted with an abundance of automation. While the autopilot does a fine job much of the time, the actual pilots have learned to tune out the many alarms generated by the computers. For example, modern Boeing 777 aircraft have 200 such alerts, which are carefully orchestrated and color-coded to avoid alarm fatigue, according to this Wired article. The complex and non-intuitive interplay of automation, bad sensors, and contradictory alarms tragically overwhelmed the pilots in the 2009 crash of Air France Flight 447, an Airbus A330.

Alarm fatigue is a natural product of automation (dencg/Shutterstock)

The stakes are also high in cybersecurity, another discipline that enjoys an abundance of data (or suffers from it, depending on your point of view). If automation is important for operating and maintaining our extensive and increasingly complex application stacks, then it’s absolutely essential for detecting malware and the bad actors who are trying to hide in network traffic.

Large companies today are inundated with requests coming in over the Internet, most of which are legitimate and a small fraction of which are malicious. Network operators need tools to separate the wheat from the chaff and keep the network free of threats, whether those tools use a traditional heuristics- or “fingerprint”-based approach or more modern ML and AI techniques.

However, spotting the bad traffic and bad actors is much easier said than done. According to Stefan Pracht, a senior vice president of product marketing at the cybersecurity firm Axellio, the current slate of tools inundates operators with false positives.

“Security analysts today are experiencing alarm fatigue,” Pracht said. “In a large corporation, you can get a million alarms on a daily basis. There is no chance that analysts can really get through all that, so they need to make an assessment which of these alarms to really go after.”

Pracht lays much of the blame for this alarm fatigue on ML. Companies have been told by vendors that security solutions that use AI and ML can automatically detect malicious traffic with a minimum of configuration and training of users. But in reality, it doesn’t work that way.

“What we’ve seen with many ML projects is they say ‘Hey, let’s just run your traffic through and find out what anomalous behavior you have on the network,’” Pracht said. “And then you get a software upgrade in Microsoft Office, which suddenly throws off all kinds of different alarm bells because it slightly changes how the application works, and ML just can’t deal with that.”
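
Pracht’s software-update example is easy to reproduce with a generic unsupervised detector. The sketch below is only an illustration of the failure mode, not a depiction of any vendor’s product: it fits scikit-learn’s IsolationForest to hypothetical baseline flow features, then scores the kind of transfer burst a legitimate Office upgrade might generate.

```python
# A minimal sketch of anomaly detection on network-flow features.
# Feature choices and values are hypothetical.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Baseline traffic: columns = [bytes_out, connections_per_min, distinct_dest_ports]
baseline = np.column_stack([
    rng.normal(2_000, 300, 5_000),   # modest upload volume
    rng.normal(20, 5, 5_000),        # steady connection rate
    rng.normal(3, 1, 5_000),         # few distinct ports
])

model = IsolationForest(contamination=0.01, random_state=0).fit(baseline)

# A legitimate software update: a burst of large transfers to new endpoints.
update_burst = np.array([[50_000, 200, 40]])
print(model.predict(update_burst))  # [-1] -> flagged as anomalous, i.e. a false positive
```

The model is doing exactly what it was asked to do, flagging behavior that differs from the baseline; it simply has no way of knowing that the change is benign.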

Security analysts suffer from alarm fatigue (dotshock/Shutterstock)

The fact is, ML-based security solutions simply are not that good at detecting malicious activity out of the box with a minimum of false positives. Extensive customization and analyst training are required to whittle those false positives down to the point where analysts will actually start believing the alarms.
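
In practice, much of that whittling-down is threshold work. A rough sketch of the tuning loop, using synthetic alert scores and analyst labels rather than any real data, might look like this:

```python
# Raise the alert threshold until precision reaches an acceptable level.
# Scores and labels are synthetic and purely illustrative.
import numpy as np
from sklearn.metrics import precision_recall_curve

rng = np.random.default_rng(1)
labels = rng.binomial(1, 0.05, 10_000)            # 5% of past alerts were real threats
scores = np.where(labels == 1,
                  rng.normal(0.7, 0.15, 10_000),  # real threats tend to score higher...
                  rng.normal(0.4, 0.15, 10_000))  # ...but the distributions overlap

precision, recall, thresholds = precision_recall_curve(labels, scores)

# Pick the lowest threshold at which at least half the alerts are real.
ok = precision[:-1] >= 0.5
threshold = thresholds[ok][0]
print(f"alert threshold: {threshold:.2f}, recall kept: {recall[:-1][ok][0]:.1%}")
```

The trade-off is explicit: every notch the threshold moves up to suppress false positives also discards some real intrusions, which is why the tuning needs people who understand both the model and the traffic.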

“ML unfortunately creates as many false positives as it creates real intrusions,” Pracht told Datanami. “If the analyst doesn’t understand how that conclusion was made, that your email was malicious, they then have to dig through the sea of data that they have access to in order to determine why that email was malicious, who was impacted, where it came from, and how far that threat actually progressed within the organization.”

In the end, after conducting a lot of manual work, the ML-based system doesn’t save the security analyst much time at all. What has been marketed as the Holy Grail of threat detection is actually not much better than what they had before.

“Users are spending a lot of money to implement an algorithm within an environment, but then are not spending the time and money and resources to have the right kind of people to fine tune these algorithms,” Pracht said. “It’s pretty much by trial and error.”

What the ML-for-security market needs, Pracht said, is a much more targeted approach. Instead of throwing an anomaly detection algorithm at the entirety of a company’s network traffic and asking it to flag malicious traffic, the vendor and the customer should seek closer ties between the data scientists developing the AI models and the security analysts who are using them.

“We need the data scientists to improve ML, but we need the expertise and competency that we find in network and application developers to better look at the behavior that their applications generate in the network, and how we can better extract the information out of it,” Pracht said. “I think that conversation right now is not happening.”

Better visibility into ML algorithms is part of the solution to build better models (Pdusit/Shutterstock)

It is not an unreasonable claim. It is well known that the number one reason why big data and AI projects fail is that the data is a mess. Why would we expect it to be any different in security?

The solution, as Pracht sees it, is a recommitment by vendors and customers to work together to really get the details right. Instead of big-bang implementations that look at all of the network traffic, there should be a concerted effort to focus on smaller segments of the data and boost classification accuracy there.
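
One hypothetical way to express that segmented approach, assuming flow records already tagged with an application or subnet label, is to keep a separate baseline per segment rather than a single global model:

```python
# Per-segment anomaly baselines instead of one global model.
# Segment names, features, and the contamination setting are all assumptions.
from collections import defaultdict
import numpy as np
from sklearn.ensemble import IsolationForest

def fit_per_segment(flows):
    """flows: iterable of (segment_name, feature_vector) pairs."""
    grouped = defaultdict(list)
    for segment, features in flows:
        grouped[segment].append(features)
    return {
        segment: IsolationForest(contamination=0.01, random_state=0).fit(np.array(rows))
        for segment, rows in grouped.items()
    }

def triage(models, segment, features):
    """Score a new flow against its own segment's baseline only."""
    if segment not in models:
        return "unknown segment - escalate to an analyst"
    flagged = models[segment].predict(np.array([features]))[0] == -1
    return "anomaly" if flagged else "normal"
```

A change confined to one application then disturbs only that application’s baseline, which keeps the blast radius of false positives small and gives analysts a narrower slice of data to understand.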

“That’s really the problem right now,” Pracht said. “We’re leaving that to the vendors, and whatever traffic is thrown at them, and then we’re surprised that we don’t get the results that we expect, besides the fact that the data wasn’t chosen wisely.”

Until the data scientists developing the models that will be implemented in the security solutions actually understand the nuances of the data, the false positives are likely to continue, and ML for security will continue to wallow in the trough of disillusionment, according to Pracht.

“There’s so much promise, so much hype around ML. ‘Hey, this really solves all of your resources and expertise problems,’” he said. “And what companies are finding out is it’s not that gold bullet. They actually need to look for more qualified people now in order to understand what that system is doing and how to really take advantage of it.”

Related Items:

Deep Learning Is Our Best Hope for Cybersecurity, Deep Instinct Says

Navigating Data Security Within Data Sharing In Today’s Evolving Landscape

Why Machine Learning Is Our Last Hope for Cybersecurity

 
