People have been debating the trade-off between cost and effectiveness in security technologies since the dawn of the Internet. One such debate occurred in the SearchSecurity article Schneier-Ranum Face-Off on White-Listing and Blacklisting. Marcus Ranum is the chief security officer of Tenable Network Security and a recognized innovator in firewall and intrusion detection system technologies. He frames his side of the debate around “security effectiveness.” He suggests blacklisting technologies have failed to keep up with the malware explosion. White listing, he insists, addresses this problem, and enterprises should accept the cost of managing a white list.

Bruce Schneier is chief security technology officer of BT Global Services and a recognized computer security technology expert, cryptographer and writer. He frames his side of the debate around “controlling cost and complexity.” He argues that in many implementations, maintaining a small blacklist is easier than maintaining a huge white list.

In our opinion, both methods can be effective if applied correctly to the right context. Rather than apply a one-size-fits-all solution, choose the method based on what you are trying to achieve.

Blacklisting
Blacklisting can work effectively against non-targeted, large-scale attacks where real-time intelligence is available. Take financial malware, for example. It is notorious for bypassing signature-based antivirus detection. Some solutions, like IBM Security Trusteer Rapport, use behavioral blacklisting to effectively stop those threats.

Malware developers can adjust their software to evade detection, however. A blacklisting-based control can then use real-time intelligence to detect the change across many endpoints, deploy a counter-measure through the cloud and break the attack before it can gather any steam. Because the cost of adapting the control is lower than the cost of adapting the malware, the hackers are at a major disadvantage.
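The mechanics can be sketched in a few lines. The feed, event fields and rule patterns below are hypothetical placeholders for a real-time intelligence service, not any vendor’s actual API; the point is that rules describe *behavior*, so a repacked binary with a new hash is still caught, and refreshing the rule set from the cloud is far cheaper than rewriting the malware:

```python
# Minimal sketch of behavioral blacklisting with a cloud-refreshed rule set.
# All names (fetch_latest_rules, Event) are hypothetical, for illustration only.

from dataclasses import dataclass

@dataclass(frozen=True)
class Event:
    process: str
    action: str
    target: str

def fetch_latest_rules():
    """Stand-in for a real-time cloud intelligence feed: each rule is a
    (process, action, target) behavior pattern seen in active malware."""
    return {
        ("*", "inject", "browser"),       # code injection into the browser
        ("*", "hook", "https_traffic"),   # hooking encrypted traffic
    }

def is_malicious(event: Event, rules) -> bool:
    # Match on observed behavior, not on file hash, so a repacked
    # or slightly modified binary is still detected.
    return any(
        proc in ("*", event.process)
        and action == event.action
        and target == event.target
        for proc, action, target in rules
    )

rules = fetch_latest_rules()
print(is_malicious(Event("dropper.exe", "inject", "browser"), rules))  # True
print(is_malicious(Event("backup.exe", "copy", "file_share"), rules))  # False
```

When the malware authors change tactics, only `fetch_latest_rules` needs a new pattern, deployed once through the cloud to every endpoint.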

This isn’t true for targeted attacks in the enterprise world. There, a single attack on a large enterprise can be developed over a long period of time, using zero-day exploits to evade detection, delivering advanced malware to a few endpoints and exfiltrating data over encrypted channels. Blacklisting technologies cannot provide an effective solution here because the targeted nature of the attack means that timely intelligence is simply not available.

White-Listing

White listing makes very few assumptions about the nature of the threat because it focuses on a list of known-good application files; however, managing this list is a daunting task. Imagine what is required to vet every new application file introduced by employees’ downloads and installs or by software updates. And there is an ongoing risk that you could accidentally white-list malware files (yes, this can happen).
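At its core, file white-listing is a hash lookup against a vetted inventory; the hard part is keeping that inventory current. A minimal sketch, with an illustrative allow-list rather than any real vetting pipeline:

```python
# Sketch of hash-based application white-listing. The allow-list entry here
# is illustrative; in practice it would be populated by a vetting process.
import hashlib

ALLOWED_SHA256 = {
    hashlib.sha256(b"approved-binary-contents").hexdigest(),
}

def may_execute(file_bytes: bytes) -> bool:
    """Permit execution only if the file's hash is on the white list."""
    return hashlib.sha256(file_bytes).hexdigest() in ALLOWED_SHA256

print(may_execute(b"approved-binary-contents"))  # True
print(may_execute(b"unknown-download"))          # False
```

The check itself is trivial; every new download, install and patch forces another vetting decision, which is exactly the management burden described above. Note too that if a malicious file is ever vetted by mistake, this check will happily approve it.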

Beyond additional work for the IT department, white listing places severe restrictions on knowledge workers’ productivity that go against current trends in BYOD and IT consumerization. Does this mean you must accept the cost of white listing if you truly want to reduce the risk of targeted attacks? Not necessarily: innovation should focus on a white-listing approach that can actually work for large enterprises.

Tailoring White-Listing

Maybe it isn’t necessary to white-list every single good file in the universe. Employees’ endpoints are often compromised by zero-day exploits that deliver malware to the file system and execute it. If we can stop the exploitation of vulnerable Internet-facing apps (Web browsers; Adobe Reader, Acrobat and Flash; Microsoft Office and Java) by white-listing the legitimate ways they can access the file system or other processes, we can protect users when they go to the wrong Web sites and open up the wrong documents. This reduces the attack surface considerably.
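The idea of white-listing behaviors instead of files can be sketched as a per-application policy; the process names and directories below are hypothetical examples, not a real product’s policy format:

```python
# Sketch: instead of white-listing every good file, white-list the legitimate
# file-system behaviors of Internet-facing apps. Policy entries are illustrative.

FS_POLICY = {
    # process: directories it may legitimately write to
    "acrobat.exe": {"/tmp", "/home/user/Documents"},
    "browser.exe": {"/tmp", "/home/user/Downloads"},
}

def write_allowed(process: str, path: str) -> bool:
    """Allow a write only if it falls under a directory the app legitimately uses."""
    allowed_dirs = FS_POLICY.get(process, set())
    return any(path.startswith(d + "/") for d in allowed_dirs)

# An exploited PDF reader dropping a payload into a startup folder is blocked:
print(write_allowed("acrobat.exe", "/home/user/.config/autostart/payload"))  # False
# A normal download by the browser is permitted:
print(write_allowed("browser.exe", "/home/user/Downloads/report.pdf"))       # True
```

The policy stays small because it describes a handful of applications’ legitimate behaviors, not every good file in the universe.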

If users are lured into installing malware directly on the endpoint, the malware must still communicate with its C&C server for the attackers to exfiltrate data. What if we could control which applications talk to the Internet, and how they do it (directly or via other processes), using a tightly managed white list? It could be a great way to detect endpoint compromise before the damage is done and before evasion tactics fool network controls. The innovation cycle for protecting users from targeted attacks is accelerating, and solving this security challenge in a way that large enterprises can actually deploy is the Holy Grail of security.
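An outbound-connection white list of this kind can be sketched the same way; the process names and port sets below are hypothetical, and a real control would also consider destinations and parent processes:

```python
# Sketch of an egress white list: only approved processes may open Internet
# connections, and only on approved ports. Entries are illustrative.

EGRESS_POLICY = {
    "browser.exe": {80, 443},
    "mail.exe": {587, 993},
}

def connection_allowed(process: str, port: int) -> bool:
    """An unlisted process, or an unusual port, is treated as suspect."""
    return port in EGRESS_POLICY.get(process, set())

# Malware beaconing to its C&C server from an unapproved process is flagged
# even though it uses port 443 to blend in with normal encrypted traffic:
print(connection_allowed("updater_fake.exe", 443))  # False
print(connection_allowed("browser.exe", 443))       # True
```

Because the decision is made on the endpoint, before traffic reaches the network, encrypted channels and other network-level evasion tactics do not hide the compromise.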
