Did you know the software that powers our brains contains security flaws that need to be patched? I’m talking about cognitive biases, which are the wetware vulnerabilities that collectively constitute the single greatest threat to enterprise data security.
The Interaction Design Foundation defines cognitive bias as “an umbrella term that refers to the systematic ways in which the context and framing of information influence individuals’ judgment and decision-making.”
In other words? A cognitive bias is simply a systematic error in thinking that's as human as the enjoyment of cupcakes and rainbows. Yes, people are irrational, but irrationality is a vague generality. Cognitive biases, on the other hand, are specific and well-defined.
Cognitive Biases Put Data Security at Risk
Don't confuse cognitive biases (which describe flaws in thought processes) with logical fallacies (which describe flaws in arguments during communication). The former is about thoughts; the latter is about words. The distinction matters because cognitive bias is one of the biggest reasons enterprise data ends up insecure. In fact, these errors in thinking are a significant reason why 27 percent of employees fail social engineering tests.
Social engineering is nothing more than the systematic exploitation of human cognitive biases. Successful phishing attackers, for example, know how to exploit these biases to convince recipients to click links they would never open if they were acting on pure logic.
Here's another example of how cognitive biases can compromise security. Say a member of an organization's computer security incident response team (CSIRT) is confronted with a new breach and begins working through a list of possible causes:

- A responder prone to anchoring bias might fixate on the first possibility considered instead of the most likely one.
- A responder influenced by the availability heuristic might consider only the potential sources that happen to come to mind, rather than taking a systematic approach that weighs all the possibilities.
- A responder subject to the Dunning-Kruger effect, a cognitive bias that causes people to overestimate their own abilities, might try to investigate and resolve the incident alone rather than bringing in colleagues, consultants or specialists.
In each of these cases, the responder fails to approach the problem systematically and with reason. Instead, they let cognitive biases muddle the process, creating unnecessary cost, consuming too much time and introducing avoidable risk.
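To make that concrete, here's a minimal sketch in Python of what a bias-resistant triage step might look like. The candidate causes and indicators are hypothetical placeholders, not a real taxonomy; the point is that every hypothesis gets scored against the evidence before anyone commits to a theory.

```python
# A minimal triage sketch: enumerate every candidate cause up front and
# score each one against the observed evidence, instead of anchoring on
# the first hypothesis that comes to mind. The causes and indicators
# below are hypothetical placeholders.
CANDIDATE_CAUSES = {
    "phishing credential theft": {"suspicious logins", "new mail forwarding rules"},
    "unpatched public service": {"exploit signatures", "unexpected outbound traffic"},
    "insider misuse": {"off-hours access", "bulk data exports"},
    "misconfigured storage": {"public bucket access", "anonymous reads"},
}

def triage(observed):
    """Rank every candidate cause by how much evidence supports it."""
    scores = [
        (cause, len(indicators & observed))
        for cause, indicators in CANDIDATE_CAUSES.items()
    ]
    # Sorting by evidence overlap puts the best-supported cause first,
    # regardless of which hypothesis happened to come to mind first.
    return sorted(scores, key=lambda pair: pair[1], reverse=True)

if __name__ == "__main__":
    evidence = {"suspicious logins", "bulk data exports"}
    for cause, score in triage(evidence):
        print(f"{score} matching indicator(s): {cause}")
```

No script cures anchoring, of course, but forcing the full list of hypotheses into view is exactly the kind of structure a good incident response process builds in.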
Given the rapidly increasing volume and frequency of cyberthreats, it’s more important than ever to address cognitive bias head-on. Investing in the right incident response platform (IRP) can go a long way toward eliminating cognitive bias-driven decision-making.
Logical Flaws Lead to Security Lapses
As a writer, I've learned to watch out for attentional bias, in which a person's recurring thoughts shape what he or she perceives. This bias can become a security risk when it comes to writing and interpreting technical documentation for software or hardware features.
Creators of documentation must first become extremely familiar with the issues, technologies, processes and methods they’re documenting. Because these factors are top of mind, descriptions might gloss over or omit contextual cues for readers who have a different set of ideas in mind or are less familiar with the issues at hand. In other words: What seems obvious to the writer might be a source of confusion for the reader — with neither party able to relate to the other’s point of view.
A May 2018 survey of 155 IT professionals at the RSA Conference found that 26 percent of companies ignore security bugs because they believe they don't have time to fix them. The problem, however, is that dealing with the consequences of an unfixed bug tends to take longer than applying the fix would have in the first place.
This could be the result of a cognitive bias called hyperbolic discounting, where choices that benefit the present self are given priority over those that benefit the future self. In this context, the benefits of ignoring a bug now are given more weight than the cost of dealing with the problem later.
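A toy calculation shows how the math of this bias plays out. The standard one-parameter hyperbolic model values a future cost A arriving after a delay d at A / (1 + kd); the hours, delay and k value below are made-up numbers chosen only to illustrate the shape of the effect.

```python
# Hyperbolic discounting, toy version: a future cost A arriving after a
# delay of d days "feels" like A / (1 + k * d) today. All figures below
# are invented for illustration.
def felt_value(amount, delay_days, k=0.2):
    """Hyperbolically discounted present value of a future cost."""
    return amount / (1 + k * delay_days)

fix_cost_now = 8          # hours to patch the bug today
breach_cost_later = 200   # hours of cleanup if the bug is exploited
breach_delay_days = 180   # the exploit feels comfortably far away

print(f"Actual future cost: {breach_cost_later} hours")
print(f"Felt cost today:    {felt_value(breach_cost_later, breach_delay_days):.1f} hours")
print(f"Cost of fixing now: {fix_cost_now} hours")
# With these numbers, the 200-hour cleanup "feels" like about 5 hours,
# so skipping the 8-hour fix looks like the cheaper choice, even though
# it clearly isn't.
```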
The survey also revealed that IT professionals deliberately ignore security holes for other reasons, including a lack of knowledge about how to proceed. This choice could be driven by a cognitive bias called the ambiguity effect, in which people avoid options whose outcomes are unknown. Because the path to troubleshooting a problem is unclear, that path is rejected.
Finally, less than half of the organizations surveyed said they patch vulnerabilities as soon as they're known. Eight percent of respondents even reported that they apply patches just once or twice per year. This is good, old-fashioned procrastination, which is itself rooted in cognitive bias.
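One practical counter to patching procrastination is to make the delay visible. Here's a small, hypothetical sketch that computes "time to patch" for a list of vulnerabilities; the CVE identifiers and dates are invented for illustration.

```python
# A hypothetical patch-latency report: given when each vulnerability
# became known and when it was actually patched, print how long the fix
# took (or how long it has been outstanding). All entries are made up.
from datetime import date

vulnerabilities = [
    ("CVE-2018-0001", date(2018, 1, 10), date(2018, 1, 14)),
    ("CVE-2018-0002", date(2018, 2, 3), date(2018, 6, 28)),
    ("CVE-2018-0003", date(2018, 3, 19), None),  # still unpatched
]

for cve, known, patched in vulnerabilities:
    if patched is None:
        age = (date.today() - known).days
        print(f"{cve}: UNPATCHED for {age} days")
    else:
        print(f"{cve}: patched in {(patched - known).days} days")
```

A number on a dashboard is much harder for the present self to discount than a vague promise to the future self.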
Understanding Biases to Reduce Human Error
Awareness of specific cognitive biases must be a core part of every security training exercise. The first step toward overcoming cognitive biases is for everyone to understand that they exist, they're pervasive and they have a negative impact on data security. Cognitive biases are also a big part of why best practices exist: They embody institutional learning and lessons that reduce reliance on individual thought processes.
Most importantly, security professionals must overcome the biases that enable other biases. At many organizations, security specialists fail to understand the perspective of less technical users. This lack of understanding is a cognitive bias called the curse of knowledge, and it can result in false assumptions and poor communication.
But the mother of all cognitive biases is the belief that only other people have cognitive biases. This is called the bias blind spot. The truth is that cognitive biases are just part of being human. I have them, you have them and nobody is immune.
It's important for security leaders to base their decision-making on this inescapable fact and to keep patching the wetware vulnerabilities that constitute the biggest threat to their organizations' security.