People have been debating the trade-off between cost and effectiveness in security technologies since the dawn of the Internet. One such debate occurred in the SearchSecurity article “Schneier-Ranum Face-Off on White-Listing and Blacklisting.” Marcus Ranum is the chief security officer of Tenable Network Security and a recognized innovator in firewall and intrusion detection system technologies. He leads his side of the debate with “security effectiveness,” suggesting that blacklisting technologies have failed to keep up with the malware explosion. White-listing, he insists, addresses this problem, and enterprises should accept the cost of managing a white list.

Bruce Schneier is chief security technology officer of BT Global Services and a recognized computer security expert, cryptographer and writer. He leads his side of the debate with “controlling cost and complexity,” arguing that in most implementations, maintaining a small blacklist is far easier than maintaining a huge white list.

In our opinion, both methods can be effective if applied correctly in the right context. There is no one-size-fits-all solution; the right method depends on what you are trying to achieve.


Blacklisting

Blacklisting can work effectively against non-targeted, large-scale attacks where real-time intelligence is available. Take financial malware, for example: it is notorious for bypassing signature-based antivirus detection. Some solutions, like IBM Security Trusteer Rapport, use behavioral blacklisting to stop those threats effectively.

Malware developers can adjust their software to evade detection, however. A blacklisting-based control can then use real-time intelligence to detect the change across many endpoints, deploy a countermeasure through the cloud and break the attack before it gathers steam. Because adapting the control costs less than adapting the malware, the attackers are at a major disadvantage.
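The economics described above can be sketched in a few lines. This is a toy illustration only, not the actual design of any product: a local behavioral blacklist is refreshed from a hypothetical cloud intelligence feed, so that one published rule update adapts every endpoint at once.

```python
# Illustrative sketch only: a toy endpoint agent that refreshes its
# behavioral blacklist from a hypothetical cloud feed and checks
# observed process behaviors against it. All names are invented.

blacklist = {"injects_into_browser", "hooks_keyboard_api"}  # current rules


def refresh_blacklist(cloud_feed):
    """Merge newly published behavior signatures into the local blacklist.

    `cloud_feed` stands in for a real-time intelligence service; here it
    is just an iterable of behavior names.
    """
    blacklist.update(cloud_feed)


def is_malicious(observed_behaviors):
    """Flag a process if any observed behavior matches the blacklist."""
    return any(b in blacklist for b in observed_behaviors)


# The malware author tweaks behavior to evade the old rules...
assert not is_malicious({"patches_ssl_verify"})
# ...but a single cloud push adapts every endpoint at once.
refresh_blacklist({"patches_ssl_verify"})
assert is_malicious({"patches_ssl_verify"})
```

The asymmetry is the point: the attacker must re-engineer the malware, while the defender only pushes one new entry to the feed.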

This isn’t true for targeted attacks in the enterprise world. A single attack on a large enterprise can be developed over a long period of time, using zero-day exploits to evade detection, delivering advanced malware to a few endpoints and exfiltrating data over encrypted channels. Here, blacklisting technologies cannot provide an effective solution, and the targeted nature of the attack means that timely intelligence is simply not available.

White-Listing

White-listing makes very few assumptions about the nature of the threat because it focuses on the list of known good application files; however, managing this list is a daunting task. Imagine what is required to vet every new application file introduced by employees’ downloads and installs or by software updates. And there is an ongoing concern that you could accidentally white-list malware files (yes, this can happen).
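At its core, a file white list is a lookup of known-good fingerprints. The minimal sketch below (all names hypothetical) uses SHA-256 hashes; it also hints at why maintenance is so painful, since every legitimate update changes a file’s hash and forces a re-vetting.

```python
# Illustrative sketch only: allow a file to run only if its SHA-256
# hash is on the vetted white list. Names and hashes are examples.
import hashlib

known_good_sha256 = {
    # Hashes would come from vetting each approved application file.
    # This one is the SHA-256 of an empty file, used for the demo.
    "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
}


def is_whitelisted(path):
    """Return True only if the file's SHA-256 is on the white list."""
    with open(path, "rb") as f:
        digest = hashlib.sha256(f.read()).hexdigest()
    return digest in known_good_sha256
```

Any change to a whitelisted file, even a routine patch, produces a new hash and a deny-by-default block until the file is vetted again, which is exactly the operational burden described above.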

Beyond creating additional work for the IT department, white-listing places severe restrictions on knowledge workers’ productivity that go against current trends in BYOD and IT consumerization. Does this mean you must accept the cost of white-listing if you truly want to reduce the risk of targeted attacks? Not necessarily. Innovation should focus on a white-listing approach that can work for large enterprises.

Tailoring White-Listing

Maybe it isn’t necessary to white-list every single good file in the universe. Employees’ endpoints are often compromised by zero-day exploits that deliver malware to the file system and execute it. If we can stop the exploitation of vulnerable Internet-facing applications (Web browsers; Adobe Reader, Acrobat and Flash; Microsoft Office; and Java) by white-listing the legitimate ways they can access the file system or other processes, we can protect users when they visit the wrong websites and open the wrong documents. This reduces the attack surface considerably.
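The idea above can be sketched as a small behavior policy rather than a file inventory: instead of enumerating every good file, enumerate the few legitimate actions each exposed application performs. Everything here (application names, action labels) is a hypothetical example, not a real product policy.

```python
# Illustrative sketch only: white-list the legitimate file-system
# actions of a handful of Internet-facing apps, instead of
# white-listing every good file. All names are hypothetical.

allowed_actions = {
    ("browser", "write_to_downloads"),
    ("pdf_reader", "read_document"),
    ("office", "write_temp_file"),
}


def permit(app, action):
    """Deny anything an exposed app is not known to legitimately do."""
    return (app, action) in allowed_actions


# A browser dropping an executable into a system directory is denied,
# even when the payload is a never-before-seen zero-day.
assert permit("browser", "write_to_downloads")
assert not permit("browser", "write_executable_to_system32")
```

Because the policy covers a few applications and a few actions, it stays small enough to manage, which is the whole argument for tailoring the white list.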

If users are lured into installing malware directly on the endpoint, the malware must communicate with its command-and-control (C&C) server so the attackers can exfiltrate data. What if we could control which applications talk to the Internet, and how they do it (directly or via other processes), using a tightly managed white list? That could be a great way to detect endpoint compromise before the damage is done and before evasion tactics fool network controls. The innovation cycle for protecting users from targeted attacks is accelerating; solving this challenge in a way that large enterprises can actually deploy is the Holy Grail of security.
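An egress white list of this kind can be sketched in the same spirit: map each known process to the channels it may legitimately use, and treat everything else, unknown processes or known processes using unexpected channels, as a possible sign of compromise. Process and channel names below are invented for illustration.

```python
# Illustrative sketch only: a tightly managed white list of which
# processes may talk to the Internet, and through what channel.
# All process and channel names are hypothetical.

egress_whitelist = {
    "browser.exe": {"direct"},
    "updater.exe": {"via_proxy"},
}


def allow_egress(process, channel):
    """Unknown processes get no network access; known processes are
    limited to their expected channels, so malware tunneling through
    an unexpected path stands out."""
    return channel in egress_whitelist.get(process, set())


assert allow_egress("browser.exe", "direct")
assert not allow_egress("dropper.exe", "direct")   # unknown process
assert not allow_egress("updater.exe", "direct")   # unexpected channel
```

A denied egress attempt here is not just a blocked connection: it is an early indicator that the endpoint may already be compromised, before any data leaves the network.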
