Detection Is NOT the New Prevention

What Does It Take to Stop Today’s Cyber Attackers? Is Prevention Pretty Much Dead?

Our previous blog on advanced threat protection discussed the coordination a system needs to defend against advanced threats, comparing it to our ever-active immune systems, which detect, block and counterattack invaders. The same coordination applies to advanced threat protection: attacks must be both detected and prevented.

Unfortunately, there has been a dangerous shift recently in our industry’s prevailing wisdom across solution vendors, analysts and the very organizations looking to protect themselves: The new perspective is that sophisticated, persistent and well-funded attacks are too difficult to prevent, making detection the top priority. In fact, some security solution vendors go so far as to say that “detection is the new prevention.” This notion moves us unnecessarily and dangerously in the wrong direction.

If we look once again at our immune system analogy, we see a sophisticated level of communication and coordination to identify and ultimately counterattack invaders. How successful would our immune systems be if their operation stopped at detecting infections? Not at all. Similarly, in our IT environment, our threat protection systems cannot stop at threat identification: They must strive to disrupt the attack in near-real time. This approach does not diminish the value of early detection, nor does it claim that prevention is possible 100 percent of the time. Rather, it asserts that both prevention and detection are required, working in a coordinated manner. Yes, prevention is a challenge, but it is possible; just because it has gotten more difficult does not mean we should stop trying.

The Attack Opportunity Lifecycle

How did we get to the point where we’re ready to throw up our hands and give up the fight? To understand this, we need a brief history lesson on how we’ve attempted to deal with threats over the years and how they typically progress.

The image below illustrates the Attack Opportunity Lifecycle. Simply put, this is how a particular opportunity for compromise comes into existence, thrives and eventually fades.

(Image: The Attack Opportunity Lifecycle: Vulnerability Creation → Vulnerability Discovery → Exploit Creation → Exploit Discovery → Exploit Mitigation)

  1. Vulnerability Creation: Improper coding results in a weakness within a piece of software.
  2. Vulnerability Discovery: The vulnerability is found either by researchers, who disclose it so that the software’s creator can fix it, or by attackers, who quietly leverage it for malicious purposes.
  3. Exploit Creation: An attack, or “exploit,” is created that takes advantage of the discovered vulnerability. “Zero-day” exploits, which target vulnerabilities not yet publicly known, are the most valuable because they carry the element of surprise.
  4. Exploit Discovery: The exploit is eventually discovered and made public, usually (and unfortunately) during forensic investigations following a breach.
  5. Exploit Mitigation: Security solution vendors instrument their solutions to look for the newly discovered exploit on networks and endpoints. This begins the devaluation of the exploit and ultimately the death of the attack opportunity.
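
For readers who think in code, the five stages above can be sketched as a simple ordered enumeration. This is purely illustrative; the stage names come from the lifecycle described in this post, not from any product.

```python
from enum import IntEnum
from typing import Optional

class AttackOpportunityStage(IntEnum):
    """Stages of the Attack Opportunity Lifecycle, in order."""
    VULNERABILITY_CREATION = 1   # improper coding introduces a weakness
    VULNERABILITY_DISCOVERY = 2  # researchers or attackers find it
    EXPLOIT_CREATION = 3         # an attack is built against the weakness
    EXPLOIT_DISCOVERY = 4        # the exploit surfaces, often post-breach
    EXPLOIT_MITIGATION = 5       # defenses are instrumented; the opportunity fades

def next_stage(stage: AttackOpportunityStage) -> Optional[AttackOpportunityStage]:
    """Return the stage that follows, or None once the opportunity is dead."""
    if stage is AttackOpportunityStage.EXPLOIT_MITIGATION:
        return None
    return AttackOpportunityStage(stage + 1)
```

The ordering matters: each stage of the lifecycle devalues the one before it, which is why the later discussion focuses on where along this progression defenders choose to intervene.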

In the early days of information security, the aim was to focus on the exploit mitigation stage and create signatures that would detect already-discovered exploits. This was — and in many ways still is — the approach favored by antivirus products and pattern-matching intrusion detection/prevention systems (IDS/IPS). It is clearly a reactive approach, since protection cannot be offered until the exploit emerges and is discovered in the wild. Additionally, as new variants of the exploit inevitably appear, more signatures must be written to keep up with them. Even with these failings, however, exploit mitigation was quite effective in the days of untargeted threats, because any given organization had a low probability of being the first to encounter a given exploit.
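
A minimal sketch makes the reactive nature of signature matching concrete. The byte patterns below are hypothetical stand-ins for real antivirus or IDS signatures, which are far richer in practice.

```python
# Hypothetical byte-pattern signatures; each known variant needs its own entry.
SIGNATURES = {
    "evil_downloader_v1": b"\x90\x90\xeb\x1f",
    "evil_downloader_v2": b"\x90\x90\xeb\x2f",
}

def scan(payload: bytes) -> list:
    """Return the names of all known signatures found in the payload."""
    return [name for name, pattern in SIGNATURES.items() if pattern in payload]

# A sample containing a known pattern is flagged...
print(scan(b"header" + b"\x90\x90\xeb\x1f" + b"rest"))
# ...but a previously unseen variant slips through until a new signature is written.
print(scan(b"header" + b"\x90\x90\xeb\x3f" + b"rest"))
```

The second call is the whole problem in miniature: until someone discovers the new variant and updates the signature set, the scanner offers no protection against it.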

Now that we live in a world of targeted threats aimed at specific organizations, the focus of security solution vendors has shifted toward the exploit discovery phase. The primary technique here is sandboxing, in which a potentially malicious piece of code (i.e., “malware”) is allowed to execute in a contained virtual environment where it can do no harm to its intended target. Sandboxing is most commonly done on the network, where it has the greatest visibility. The challenge with this approach is that virtually executing a file takes minutes, not the microseconds required to operate these solutions “in-line” without impacting operations.

Thus, even though sandboxing solutions can be very good at detecting previously unknown exploits, guess what? They are not preventing anything, and we are no closer to actually stopping advanced threats. Endpoint sandboxing solutions offer a greater opportunity for prevention, but specialized hardware requirements and poor user experience when operating in-line render them less than ideal for many organizations. Hence, “detection is the new prevention.”

The Light Ahead

There is hope, however, and a potential happy ending to this story. The sandboxing vendors are right that exploit discovery is the place to focus; they are simply limited in the protective impact they can deliver. Rather than executing malware in a time-intensive simulated environment, the right approach is to instrument network and endpoint security solutions to detect malicious behavior in real time, when it can actually be stopped and prevented. Sounds great — but how do we do this?

On the endpoint, anti-malware solutions must have a deep understanding of the most commonly exploited and widely used applications that process untrusted external content (Web browsers, Adobe Acrobat, Flash, Java, MS Office, etc.). With this knowledge, an endpoint solution can analyze what each application is doing and why it is doing it. This is the approach Trusteer Apex takes, allowing it to automatically and accurately determine whether an application action is legitimate or malicious. Most importantly, it does this in real time, which allows it to block, and therefore prevent, the malicious actions attempted by advanced threats.
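
The behavioral idea, judging what an application does rather than what a file looks like, can be illustrated with a toy rule. The process names and the rule itself are illustrative assumptions, not Trusteer Apex’s actual policy.

```python
# Toy behavioral rule: applications that process untrusted external content
# have no legitimate reason to spawn a command shell. Names are illustrative.
CONTENT_APPS = {"winword.exe", "acrord32.exe", "chrome.exe", "javaw.exe"}
SHELLS = {"cmd.exe", "powershell.exe", "/bin/sh"}

def is_suspicious(parent: str, child: str) -> bool:
    """Flag a process launch where a content-processing app spawns a shell."""
    return parent.lower() in CONTENT_APPS and child.lower() in SHELLS

print(is_suspicious("WINWORD.EXE", "powershell.exe"))  # suspicious: block it
print(is_suspicious("explorer.exe", "cmd.exe"))        # ordinary user activity
```

Because the rule describes a pattern of behavior rather than a specific exploit, it can block a malicious document it has never seen before, which is exactly the real-time prevention the signature approach cannot offer.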

On the network, intrusion prevention systems should take a true protocol analysis approach rather than simply pattern matching on known signatures. IBM’s Network Protection XGS product line utilizes a variety of behavioral capabilities that allow it to block patterns of malicious behavior:

  • Shell Code Heuristics: Able to block files containing malicious shell code
  • Java Heuristics: Able to block files containing malicious Java and JavaScript code
  • Web Injection Logic: Able to block never-before-seen SQL injection and command injection attacks
  • Vulnerability Decodes: Able to block mutated exploits without the need for updates
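
As an illustration of injection logic that blocks never-before-seen attacks, here is a naive SQL-injection heuristic. It is a toy, assumed for this post, and bears no resemblance to the actual analysis in XGS; the point is that it matches the shape of an injection rather than any specific known attack string.

```python
import re

# Naive heuristic: flag inputs whose structure resembles SQL injection,
# rather than matching a signature of one particular known attack.
INJECTION_SHAPES = [
    re.compile(r"'\s*or\s+'?\d*'?\s*=\s*'?\d*", re.IGNORECASE),       # ' OR '1'='1
    re.compile(r";\s*(drop|delete|insert|update)\b", re.IGNORECASE),  # stacked query
    re.compile(r"union\s+select\b", re.IGNORECASE),                   # data exfiltration
]

def looks_like_injection(value: str) -> bool:
    """Return True if the input value matches any injection-shaped pattern."""
    return any(p.search(value) for p in INJECTION_SHAPES)

print(looks_like_injection("name' OR '1'='1"))       # injection-shaped
print(looks_like_injection("O'Brien"))               # benign apostrophe
print(looks_like_injection("1; DROP TABLE users"))   # stacked-query shape
```

A real product does full protocol and context analysis rather than regex matching, but even this sketch shows how behavior-shaped detection generalizes to mutated attacks without a signature update.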

This approach has proven tremendously effective at actually preventing advanced threats, in many cases months or even years before a given vulnerability is discovered.

It’s time to resume the fight with a more complete strategy that identifies the problem and implements an active solution. Detection is not the new prevention; our mission is, as it always has been, to stop advanced threats, not just to become proficient at detecting them quickly. With the right approach and the right technology, that goal is more than theoretically feasible, and it is far too important to give up on.

Jim Brennan

Vice President of Strategy and Offering Management, IBM Security

Jim Brennan serves as the Vice President of Strategy and Offering Management for the Security Operations & Response portfolio of IBM Security. He has over 20 years of experience in the development, management and marketing of technology products, 16 of which have been focused on information security. Jim joined IBM from Dell SecureWorks, where he led the product management team responsible for infrastructure security services. Earlier in his career, Jim spent eight years with Internet Security Systems (now part of IBM), where he held positions in research and development, product management and product marketing. Other notable roles include positions with Red Hat, EMS Technologies and the U.S. Department of Defense. Jim holds a bachelor's degree in Mechanical Engineering from the Georgia Institute of Technology and a Master of Business Administration from the Goizueta Business School at Emory University.