What is anomaly detection? Behavior-based analysis for cyber threats – Source: www.csoonline.com

Anomaly detection can be powerful in spotting cyber incidents, but experts say CISOs should balance traditional signature-based detection with more bespoke methods that can identify malicious activity based on outlier signals.

Anomaly detection is an analytic process for identifying points of data or events that deviate significantly from established patterns of behavior. In cybersecurity, anomaly detection is one of the top defensive skills organizations should consider fine-tuning to ensure they can detect and remedy adverse cyber events quickly before they take root and proliferate.

The concept of anomaly detection in cybersecurity was introduced by mathematician Dorothy Denning — who also pioneered the lattice model of secure information flow — in a landmark 1987 paper entitled “An Intrusion-Detection Model.” Since then, infosec practitioners and cybersecurity vendors have incorporated Denning’s concepts into their defense techniques, practices, and products.

“Anomaly detection is the holy grail of cyber detection where, if you do it right, you don’t need to know a priori the bad thing that you’re looking for,” Bruce Potter, CEO and founder of Turngate, tells CSO. “It’ll just show up because it doesn’t look like anything else or doesn’t look like it’s supposed to. People have been tilting at that windmill for a long time, since the 1980s, trying to figure out what normal is so they can look for deviations from it to find all the bad things happening in their enterprises.”

The challenge for CISOs now is to understand where adverse events are already being detected in their existing mix of security vendor products. Then, if appropriate, CISOs should consider elevating their anomaly detection game to give their security teams even greater power to detect troubling trends, all while shielding them from alert fatigue.

What are anomalies?

Anomalies are any deviations from routine behaviors or events within a system or network, such as a sudden spike in traffic, high activity on a server when that server should be idle, or a surge in traffic from IP addresses not typical for a particular asset. Quickly identifying outlier events can help cyber teams glean early signals of a potential attack unfolding.
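At its simplest, flagging such outliers is a statistical exercise: learn what “routine” looks like, then alert on values that stray too far from it. The sketch below, with invented traffic numbers and a threshold chosen for illustration, shows the idea with a basic z-score check (one of many possible techniques, not any particular vendor’s method):

```python
import statistics

def zscore_anomalies(counts, threshold=2.5):
    """Return indices of values more than `threshold` standard deviations from the mean."""
    mean = statistics.mean(counts)
    stdev = statistics.stdev(counts)
    if stdev == 0:
        return []  # perfectly flat baseline: nothing deviates
    return [i for i, c in enumerate(counts) if abs(c - mean) / stdev > threshold]

# Hypothetical requests-per-minute baseline with one sudden spike
requests_per_minute = [120, 118, 125, 122, 119, 121, 900, 123, 117]
print(zscore_anomalies(requests_per_minute))  # flags index 6, the spike
```

Real products use far richer baselines (time of day, asset type, peer groups), but the underlying question is the same: how far from normal is too far?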

Matt Shriner, global threat management partner and portfolio leader at IBM Consulting, tells CSO that, like all cybersecurity-related firms, IBM almost always associates anomalies with security threats. But, Shriner says, “not all anomalies are bad. Some anomalies may highlight opportunities for architectural optimization or improving business strategies, such as adapting to retail seasonal behavior changes.”

Although predicated on advanced math concepts, anomaly detection, or as the NIST Cybersecurity Framework 2.0 calls it, “adverse event analysis,” has over the past two decades been incorporated into a wide range of cybersecurity tools, including endpoint detection and response (EDR), firewalls, and security information and event management (SIEM) tools.

“In general, you can split the detection universe into two halves,” Potter says. “One is finding known bads, and then one is finding things that might be bad. Known bads are typically like a signature base where I know very specifically if I see this file or this exact thing happened on the system, it’s bad.” Known bads are typically flagged by fundamental cybersecurity tools.
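The split Potter describes can be sketched in a few lines. In this toy example (the hash set and baseline figures are placeholders, not real indicators), a signature check only matches artifacts seen before, while an anomaly check flags deviation from a learned baseline:

```python
# "Known bads": a signature check matches only exact, previously seen artifacts.
KNOWN_BAD_HASHES = {"00000000000000000000000000000000"}  # placeholder hash

def signature_match(file_hash: str) -> bool:
    """Flag a file only if its hash is already on the known-bad list."""
    return file_hash in KNOWN_BAD_HASHES

# "Might be bads": an anomaly check flags deviation from a learned baseline.
def looks_anomalous(value: float, baseline_mean: float, baseline_stdev: float,
                    threshold: float = 3.0) -> bool:
    """Flag a value that strays more than `threshold` deviations from its baseline."""
    return abs(value - baseline_mean) > threshold * baseline_stdev

# A never-before-seen file sails past the signature check...
print(signature_match("0123456789abcdef0123456789abcdef"))  # False
# ...but 500 logins against a baseline of 50 +/- 10 still stands out
print(looks_anomalous(500, baseline_mean=50, baseline_stdev=10))  # True
```

The signature check never fires on novel activity; the anomaly check can, which is exactly why the two approaches complement each other.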

“If you buy a firewall today from even the lowest kind of vendors, they’re going to have some sort of anomaly detection,” David Brumley, CEO of ForAllSecure, tells CSO. “It’s going to be at maybe the network layer, or a commonplace is WAFs [web application firewall] for intrusion detection. It’s like, ‘Hey, this looks like a bad SQL injection packet.’ It’s something that CISOs don’t have to focus on.”

Potter points out that EDR systems catch most, if not all, known bad anomalies at endpoints. “Most organizations, to be blunt, have solved the endpoint security problem,” he says. “If you’re reasonably competent, you have an EDR. If something gets through one of them, it’s just kind of a fluke.”

Andrew Krug, head of security advocacy at Datadog, singles out SIEM as security teams’ primary means for detecting anomalous behavior in their infrastructure today. “If you don’t have a facility like this, you have no way to know that something’s gone wrong,” he says.

Alert fatigue poses a significant challenge

No matter how conceptually elegant the idea of detecting anomalies might be, “the reality is it tends to be very high in both false positives and false negatives, and you spend time chasing your tail on things that aren’t bad and then things that are bad fly under the radar, and you totally miss them,” Turngate’s Potter says.

To avoid this, security operations center (SOC) personnel can set criteria to minimize false reports, “which means you’re typically more likely to detect true oddball anomalies, but you’re going to miss stealthy attacks,” ForAllSecure’s Brumley says.

On the other hand, allowing reports to fly free without filters can burn out workers. “One of the things that we talk about a lot when it comes to alerting systems is alert fatigue,” Datadog’s Krug says. “If the SIEM generates too many alerts and folks are constantly running down low-value alerts, spinning up investigations, they’re not going to enjoy working with that product.”
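The tuning trade-off both experts describe can be seen in a toy threshold sweep (the anomaly scores below are invented): a looser threshold catches the stealthy event but also the noise; a stricter one quiets the queue but misses the attack.

```python
def alerts(scores, threshold):
    """Return indices of events whose anomaly score exceeds the threshold."""
    return [i for i, s in enumerate(scores) if s > threshold]

# Hypothetical anomaly scores: index 3 is a noisy benign event,
# index 7 is a stealthy attack that only earns a moderate score.
scores = [0.1, 0.2, 0.15, 0.7, 0.1, 0.05, 0.2, 0.45]

print(alerts(scores, threshold=0.4))  # loose: [3, 7] -- catches the attack plus noise
print(alerts(scores, threshold=0.6))  # strict: [3] -- quieter, but misses the attack
```

There is no threshold here that yields only the attack, which is why SOC feedback and continual tuning, not a one-time setting, drive effective detection.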

SOC staff who work with alerts have “one of the toughest jobs in cybersecurity,” Krug adds. “It has, I think, the shortest tenure of any of the roles. Folks don’t survive long in the SOC because they’re buried in alerts. Their quality of life isn’t high. Giving those people the ability to say, ‘This alert’s not working for me,’ and have them participate in tuning is a massive part of building an effective detection strategy.”

How CISOs can up their detection game

Standard security tools do well in flagging and even remediating adverse events involving known bad anomalies. “The signature-based universe is pretty effective,” Potter says. “Most attackers are not reinventing the wheel and will do as little work as possible to reach their objective. If they can do the same thing a hundred times and are successful 10 times, it’s probably good enough.”

But it’s hard to train computers to look for bespoke anomalies, so teeing things up for human judgment can help in certain environments. “It’s one thing to raise awareness and cause the alert to go, ‘Hey, here’s something squirrelly,’” Potter says. “It’s another thing then for a human to have the signal in front of them to be able to say, ‘Oh, yeah, that’s really weird.’”

But some experts caution against placing too much emphasis on human discernment. “My advice to CISOs is to be open-minded about trying anomaly detection models when implementing their detection and response capabilities,” Datadog CISO Emilio Escobar tells CSO. “With the emerging landscape of threats combined with the complexity of the IT landscape, we will always be playing catch-up if we try to do everything using human eyes or having to write direct code that handles anomalies.”

Several use cases for anomaly detection don’t fit the typical signature detections of industry-wide trends involving ransomware, data exfiltration, or command-and-control signatures, IBM’s Shriner says. These include insider threats, fraud detection, IT systems management, and more.

But, before doing anything else, CISOs must first recognize they need the insights they can gain from more bespoke anomaly detection. “With a basic understanding of how that data knowledge can be used, in use cases like data exfiltration, compromised credentials, malware beaconing, and insider threats, organizations can then create a strategy for anomaly detection that fits their specific business case,” says Shriner.

Potter thinks organizations should seek balance when devising their custom anomaly detection programs. “For most organizations, you don’t have time to tinker yourself to come up with some anomaly detection capability on your own,” he says. “That’s where I think organizations get into trouble. You’re all in on signature detection, so if anything new happens, you’re blind to it.”

But then, conversely, “there are companies that have been all in on anomalies. There are literally no signatures. It’s just all math and AI and all this kind of stuff. And man, that can go wildly off the rails as well. So, I think when purchasing, you have to think about both. And the reality is most products, most mature products, are a reasonable combination of both.”


Original Post url: https://www.csoonline.com/article/3822459/what-is-anomaly-detection-behavior-based-analysis-for-cyber-threats.html

Category & Tags: Data and Information Security, Incident Response, Intrusion Detection Software, Security, Threat and Vulnerability Management
