June 12, 2019 By Christophe Veltsos 6 min read

Remember the last time you found yourself frustrated at the risky actions of an employee? Or perhaps it was the way top leadership decided to ignore your advice about, or budget request for, a key cybersecurity project? While risk assessment is a regular topic of conversation across the business today, when it comes to practicing good risk estimations and decisions in our daily lives, humans have some serious shortcomings.

We don’t have to look far to find examples of those shortcomings; we can find them literally down the street. Here are three types of poor risk decisions and some lessons from the streets to remind cybersecurity professionals of the need to compensate for our inherent — and very human — limitations.

Poor Decisions Abound, But They Can Be Corrected

People make poor decisions all the time. Unfortunately, because most of those decisions carry no immediate negative consequences, our brains learn to register them as good ones. Once we accept that humans excel at making poor risk assessments in our everyday lives, we can look for ways to compensate for these shortcomings. We can retrain our natural inclinations, improve processes to draw out multiple viewpoints, sharpen our analysis of frequency and impact and, ultimately, make better cybersecurity risk decisions through improved risk assessment.

A prime example of a poor risk assessment is thinking something can’t or won’t happen to us, especially because it has never happened thus far.

1. This Won’t/Can’t Happen to Us (It Never Has Before)

Imagine you’re driving along while maintaining a safe speed and safe distance from the moderately heavy traffic around you. Suddenly, a car comes zooming by, driving 10 or even 20 miles per hour faster than you. “How dangerous!” you think to yourself. As you glance at the speeding car passing you by, you also notice that the driver is distracted, perhaps fiddling with their smartphone. Not only is the driver of the vehicle putting themselves at risk, but they’re also creating risks for everyone around them.

The latest statistics from the U.S. National Highway Traffic Safety Administration found that 9 percent of all fatal crashes in 2017 were connected to distracted driving, representing more than 3,000 fatalities. The report further noted that 6 percent of drivers involved in fatal crashes were distracted at the time of the crash.

The distracted and speeding driver exemplifies a failure to adequately consider the probability of a crash event happening. Our brains are wired to look for patterns and, unfortunately, the longer we engage in risky behavior without suffering a negative consequence, the more deeply the pattern is embedded and the more confident we become that the event won’t happen. It’s easy to see how this thinking can lead decision-makers to ignore the current rash of business email compromise and ransomware attacks as a case of, “It’s never happened to us before, why would it happen to us now?”

Security leaders who face resistance when sharing their risk assessments or when requesting a new budget item should remember that the longer the organization goes without an incident, the easier it is for top leadership to fall for this mental trick. The fact that nothing happened in the past quarter, the past year or even the past decade isn’t necessarily a sign that everything is under control.

Instead, as the threat landscape shifts and businesses initiate their digital transformation, the organization now faces a different range of threats that is the equivalent of driving too fast and too distracted for the amount of traffic on the roads.

Estimating the likelihood of a cybersecurity incident requires diligence and objectivity, but is entirely achievable by the security function today. Organizations should regularly review and overhaul the ways they keep track of and manage known threats and their exposure to attacks. They can leverage state-of-the-art platforms that provide a threat assessment, assist with performance tuning of alerting rules, and automate monitoring functions. And those threats can vary based on the “roads” the business is traveling on and the “traffic” it is likely to encounter.

Further improvements in cyber resilience happen when we shift the conversation from asking, “Can this happen to us?” to asking, “What is the full range of cyber events that are plausible, and what are their consequences?” As attackers continuously invest in their tactics, techniques and procedures, it is critical for organizations — via top leadership — to start asking better questions.
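One way to move from "Can this happen to us?" toward "What is the full range of plausible events and their consequences?" is a simple Monte Carlo loss simulation. The sketch below is illustrative only: the frequency, severity and threshold figures are invented assumptions, not benchmarks, and a real program would calibrate them from the organization's own incident and threat data.

```python
import math
import random

def poisson_sample(rng, lam):
    """Knuth's algorithm for a Poisson-distributed event count."""
    limit = math.exp(-lam)
    k, p = 0, 1.0
    while p > limit:
        k += 1
        p *= rng.random()
    return k - 1

def loss_exceedance(freq_per_year, median_loss, sigma, threshold,
                    trials=50_000, seed=7):
    """Fraction of simulated years whose total loss exceeds `threshold`.

    Incident count per year is assumed Poisson; per-incident severity
    is assumed lognormal -- both are modeling assumptions.
    """
    rng = random.Random(seed)
    mu = math.log(median_loss)  # lognormal mu from the assumed median loss
    exceed = 0
    for _ in range(trials):
        n = poisson_sample(rng, freq_per_year)
        total = sum(rng.lognormvariate(mu, sigma) for _ in range(n))
        if total > threshold:
            exceed += 1
    return exceed / trials

# Hypothetical inputs: ~0.8 incidents/year, median loss $200K, wide spread.
p = loss_exceedance(0.8, 200_000, 1.2, threshold=1_000_000)
print(f"Simulated chance of losing more than $1M in a year: {p:.1%}")
```

Framing risk as a probability of exceeding a loss threshold, rather than a yes/no question, gives top leadership something concrete to weigh against the cost of mitigation.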

2. We’ll See It Coming and Will Have Plenty of Time to React

Overconfidence in our ability to react is another mental shortcoming that is often found alongside our knack for convincing ourselves that something won’t happen.

The next time it rains — or, depending on where you live, snows — notice how many drivers around you continue to drive as if they were driving on dry roads. These drivers are counting on their reaction times and their vehicles’ enhanced braking systems to get them out of a jam. But statistics from the U.S. Federal Highway Administration show that each year, more than 1 million car accidents — or 21 percent of all accidents — are due to adverse weather or slick road conditions. Research published by the American Journal of Public Health found that while rainy days saw a 6 percent increase in fatal crashes compared to dry-weather days, first-snow days (first snow in a month) were correlated with a 30 percent increase in fatal car accidents.

People underestimate how quickly a risky behavior can spin out of control, well beyond their ability to recover from it. Our reflexes just aren’t as good as we think they are. The exception to this is people who routinely and systematically measure and train their reflexes, such as first responders and pilots. Unless you are in this minority, it is reckless to let yourself think that you’ll be able to steer out of a crash or brake your way to a safe stop. First responders and pilots go through detailed and documented debriefing sessions to extract maximum lessons from their practice runs.

So, how well does your organization test its own reflexes? If dwell time, the gap between initial compromise and detection, is still measured in hundreds of days on average across the industry, is your organization really so much more effective than its competitors that it can afford to feel confident it will detect an intruder within hours or days?

Many businesses spend tens of thousands of dollars on a yearly penetration test, yet few use that opportunity to test their defenses in real time, while the adversarial activity is underway. How can you have confidence in your reaction time if you don't put it to the test frequently and regularly? Security leaders must ensure that detection and reaction times are measured and reported on regularly, in terms that are relevant to business leaders. Reaction times can be put to the test during external engagements, and internal red team (or purple team) exercises can help if the organization is large enough. Response and recovery capabilities can also be tested during the next business continuity exercise.
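Measuring your organization's reflexes can start very simply: track the time from first compromise to detection for each incident and report the distribution, not just the average. A minimal sketch, using hypothetical incident records (the dates below are invented for illustration):

```python
from datetime import datetime
from statistics import median

# Hypothetical incident records: (first compromise, detection) dates.
incidents = [
    ("2019-01-03", "2019-04-20"),
    ("2019-02-11", "2019-02-18"),
    ("2019-03-05", "2019-09-30"),
    ("2019-05-22", "2019-05-23"),
]

def dwell_days(start, end):
    """Days between initial compromise and detection."""
    fmt = "%Y-%m-%d"
    return (datetime.strptime(end, fmt) - datetime.strptime(start, fmt)).days

dwell = sorted(dwell_days(s, e) for s, e in incidents)
print(f"median dwell time: {median(dwell)} days, worst: {dwell[-1]} days")
```

Reporting the median alongside the worst case keeps one quick catch from masking the slow ones, which is exactly the overconfidence this section warns about.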

3. Even If It Happens, It Won’t Be a Big Deal

Drivers clearly have room for improvement, but so do pedestrians. Spend a few minutes by a crosswalk and you'll quickly notice the number of people who make poor risk decisions about something as basic as crossing the road. You'll notice many people crossing at the wrong time, when the traffic signal specifically indicates this is not a good time to cross. And when the designated crosswalk has no traffic signal, we can observe pedestrians underestimating the time required to cross the street, the speed of an oncoming vehicle or the road conditions. This example highlights our tendency to underestimate the impact of a poor risk decision. We often use that underestimate to justify taking a bigger risk than we should, and it often compounds the other two reasoning biases.

In cybersecurity, we often underestimate the impact and disruption that can accompany a cybersecurity risk event. Just because your previous run-in with ransomware wasn’t a big deal doesn’t mean the next time will be a breeze. The failure to conduct root cause analysis and draw honest lessons learned has left many organizations with a false sense of security. Security leaders must fully leverage incidents and near-misses to draw out lessons learned.

This also requires engaging with top leadership in tabletop or simulation exercises to practice what-if scenarios. What if this business function is infected and needs to be shut down? How many days can we continue operating at full capacity? If a third-party vendor is compromised, what would be the impact on our ability to safeguard customer data and critical operations?
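Each of those what-if questions can be anchored with a rough impact estimate so the tabletop discussion deals in numbers rather than adjectives. A minimal sketch; every figure below is a hypothetical assumption to be replaced with the organization's own:

```python
def outage_cost(days_down, revenue_per_day, capacity_lost, recovery_cost):
    """Rough direct cost of an outage: lost revenue plus cleanup.

    All inputs are illustrative assumptions, not industry benchmarks.
    """
    return days_down * revenue_per_day * capacity_lost + recovery_cost

# What-if: order processing is down 5 days, losing 60% of capacity.
cost = outage_cost(days_down=5, revenue_per_day=80_000,
                   capacity_lost=0.6, recovery_cost=150_000)
print(f"Estimated direct impact: ${cost:,.0f}")
```

Even a back-of-the-envelope figure like this makes the "it won't be a big deal" assumption something leadership can examine and challenge.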

Think of it this way: Organizations are the pedestrians trying to cross the roads, and attackers are coming at us at full speed.

Poor Risk Assessment Is Just Human Nature

It can be frustrating to see how many of us struggle to make good risk decisions in our daily lives. But realizing that it tends to be the default way that we perceive and navigate through everyday threats can serve as a good reminder that we need to improve our risk assessment processes, our ability to foresee likelihood and impact, and our own measure of reaction time. Our human neural networks have gotten us this far on the digital road, but we can’t afford to be complacent with how we track, communicate and act on cybersecurity risks.
