November 4, 2019 By Douglas Bonderud 4 min read

Cybersecurity risk management is not a purely technical or theoretical endeavor. Information security investments now drive the supply of security tools and services, with the aim of reducing data breaches and improving public perception. However, greater spending doesn't necessarily equate to improved defenses.

The disconnect between increased resource allocation and actual readiness stems from the idea that cybersecurity issues play out logically and can be resolved with the usual fixes. In practice, however, both cybersecurity outcomes and anticipated criminal behaviors may not align with rational expectations. Fortunately, there's an unexpected source of insights about how to reduce cyber risk, manage security supply and guard digital systems: economic theory. Here are three concepts from economic theory you can apply to strengthen your infosec program.

Behavioral Economics: Assumed Strength Is Often Critical Weakness

Any cyber risk reduction strategy starts with looking inward and identifying the best practices and IT solutions you already have in place to mitigate potential attacks both actively and passively. Overconfidence can create problems here. As Channel Futures noted, this is a growing issue for IT departments dealing with an increasingly complex landscape of both internal and cloud-based security controls. In fact, recent data from Hiscox suggested that 73 percent of companies aren’t prepared to handle cyberattacks, despite the vast array of infosec tools now available.

The theory of behavioral economics offers an explanation: Despite best intentions, both individuals and organizations don’t always act in their own rational self-interest. For cybersecurity initiatives, two key concepts inform this issue:

  • Planning fallacy: As Behavioral Economics observed, humans are naturally susceptible to the planning fallacy. We underestimate how long tasks will take and often ignore past experience. This is often seen in cybersecurity: “Surely, new infosec deployments won’t cause network-wide conflicts,” or “data breaches in similar industries or markets were probably just anomalies.”
  • The illusion of control: Informed by a seminal 1975 study, the illusion of control is “an expectancy of a personal success probability inappropriately higher than the objective probability would warrant.” This is a common mistake in cybersecurity risk management — having up-to-date security controls is often conflated with reduced risk, but this may not align with reality. As Security Magazine noted, 80 percent of U.S. businesses expected a “critical breach” this year.

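The gap between subjective and objective probability can be made concrete with a simple expected-loss calculation. All figures below are hypothetical, chosen only to illustrate how the illusion of control skews risk estimates:

```python
# Hypothetical sketch: how the illusion of control distorts expected loss.
# A security team assumes its controls cut the annual breach probability
# to 5 percent, while comparable industry data suggests 30 percent.
# The breach cost figure is an assumed average, not a reported statistic.

subjective_prob = 0.05   # what the security team believes
objective_prob = 0.30    # what comparable breach data suggests
breach_cost = 3_900_000  # assumed average breach cost, in dollars

perceived_loss = subjective_prob * breach_cost
actual_loss = objective_prob * breach_cost

print(f"Perceived annual expected loss: ${perceived_loss:,.0f}")
print(f"Objective annual expected loss: ${actual_loss:,.0f}")
print(f"Risk underestimated by:         ${actual_loss - perceived_loss:,.0f}")
```

Even modest overconfidence compounds at this scale: the organization in this sketch budgets for a fraction of its true expected loss, which is exactly the gap a third-party vulnerability assessment is meant to expose.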
Effectively applying behavioral economic theory to IT security means recognizing that assumed strength is often critical weakness. Aspects of your network or mobile device defenses that appear unassailable may be markers of the planning fallacy or the illusion of control. Consistent, calculated vulnerability analyses conducted by third-party providers who aren’t subject to the same potential biases as internal teams can help in these situations.

Asymmetric Information: Recognize That You’re at a Disadvantage

Many ideal economic models assume perfect information symmetry — both parties know the same things at the same time. In cybersecurity, as in the real world, this isn’t the case. Attackers often have much more in-depth data about potential targets than defenders have about threat vectors. This illustrates the economic concept of “information asymmetry.”

Consider the growing use of darknet resources and malware-as-a-service (MaaS) deployments. Attackers are now willing to share data about potential targets, which gives them a critical advantage when it comes to compromising corporate security. From company email lists to social media data from key personnel to information about the open-source technologies used in building mission-critical applications, attackers have the upper hand when it comes to information gathering. As a result, defensive postures become reactionary as organizations respond to potential threats instead of actively safeguarding weak points.

The first step is recognizing that you’re at a disadvantage. Even with strong defenses, threat actors have stolen a march on data gathering and strategy development. Next, implement corrective actions to reduce information asymmetry. This includes the deployment of threat hunting tools and advanced information-gathering practices to level the playing field and reduce the risk of unexpected attacks.

Game Theory: Reality Is Almost Always Different From Your Expectations

Game theory suggests that in non-cooperative games, where any agreements between participants are nonbinding, each party bases its decisions on how it expects other parties to act without knowing how they will actually behave. Ransomware is a good example of game theory in practice. Companies often assume that attackers will release encrypted data once they’ve been paid because their demands have been met.

But since the objective of any non-cooperative game is maximizing individual participant outcomes rather than benefiting the whole, attackers don’t always live up to their end of the bargain. In fact, Forbes reported that just 19 percent of victims get their data back after paying a ransom, while 86 percent are able to recover their information from backups if they refuse to pay.
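A rough expected-cost comparison shows why paying is a poor bet. The 19 percent and 86 percent recovery rates are the figures cited above; the ransom demand and rebuild cost are hypothetical placeholders:

```python
# Expected-cost sketch of the ransomware "game." Recovery rates are the
# figures cited above (19% of payers get data back; 86% of non-payers
# restore from backups). Ransom and rebuild costs are hypothetical.

ransom = 200_000         # hypothetical ransom demand, in dollars
rebuild_cost = 500_000   # hypothetical cost of recreating lost data

p_recover_if_paid = 0.19
p_recover_from_backup = 0.86

# Pay: the ransom is always lost, and rebuild costs still hit the
# 81% of victims whose attackers don't return the data.
expected_cost_pay = ransom + (1 - p_recover_if_paid) * rebuild_cost

# Refuse: rebuild costs apply only in the 14% of cases where backups fail.
expected_cost_refuse = (1 - p_recover_from_backup) * rebuild_cost

print(f"Expected cost if you pay:    ${expected_cost_pay:,.0f}")
print(f"Expected cost if you refuse: ${expected_cost_refuse:,.0f}")
```

Under these assumptions, refusing dominates paying by a wide margin, because the attacker's payoff-maximizing move after payment is to do nothing, which is the non-cooperative dynamic the paragraph above describes.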

Phishing is another example of game theory in practice. Employees often assume they know what phishing attacks look like and what attackers are after. In many cases, however, sophisticated attacks aren’t using error-ridden emails to compromise credentials directly. Instead, they’re creating well-crafted messages that are seemingly innocuous, leading staff to believe they’re playing one game when attackers have switched out the board and raised the stakes. The impact is evident in reported phishing statistics. As Hashed Out noted, nearly one-third of all breaches in 2018 involved phishing, and one in every 25 branded emails is a phish in disguise.

Accounting for non-cooperative play in cybersecurity risk management means helping staff recognize the divide between expectation and reality in attack behavior. This includes red-team exercises to test potential reactions in a secure environment as well as regular employee training that emphasizes the need for staff to speak up if they see something amiss.

Combat Infosec Instability by Reducing Cyber Risk

Effective cybersecurity strategies help reduce the economic impact of data breaches. The 2019 Cost of a Data Breach Report pointed out how robust incident response plans can save companies more than $300,000 per incident.

But infosec stability can be threatened by a trifecta of unhelpful factors: unintentionally irrational behavior, asymmetric information and misplaced assumptions about attacker behavior. These pitfalls tend to drive up cybersecurity costs. Fortunately, in-depth network analysis, actionable threat information and ongoing education informed by principles from economic theory can provide a framework for how to reduce cyber risk.
