More organizations are using a threat-modeling approach to identify risks and vulnerabilities and build security into network or application design early on to better mitigate potential threats.

“Threat modeling gives you the way of seeing the forest, and a frame for communicating about the work that you (and your team) are doing and why you’re doing it,” said Adam Shostack, president of Shostack and Associates, in an article for MIS Training Institute. “More concretely, [it] involves developing a shared understanding of a product or service architecture and the problems that could happen.”

Threat Modeling Missteps

The benefits seem clear, but it’s still a relatively new strategy. So, you can expect a few stumbles along the learning curve. Here are four common threat-modeling missteps — and how to avoid them.

1. Thinking One Size Fits All

“There are so many different ways to threat-model,” said Shostack. “I routinely encounter people who read the same advice and find it doesn’t quite work for them.” Approaching threat modeling as a single, massive complex process is overwhelming and sets you off on the wrong foot, he stressed.

“I think the biggest thing I see is people who treat it as a monolith,” said Shostack. “We need to communicate the steps as if they are building blocks. If one doesn’t work for you, don’t throw out threat modeling. There is no one-size-fits-all approach.”

One well-known approach is STRIDE:

  • Spoofing
  • Tampering
  • Repudiation
  • Information disclosure
  • Denial of service
  • Elevation of privilege

Of course, this may be more appropriate for some teams than others. Regardless of approach, Shostack advises teams to look at the process as a set of building blocks that fit together, and to break it up into easily digestible chunks.
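To see what those building blocks might look like in practice, here is a minimal Python sketch that pairs a few hypothetical system elements with the six STRIDE categories and prints a per-element checklist of questions. The element names and the one-element-at-a-time structure are illustrative assumptions for this sketch, not a prescribed methodology.

```python
from dataclasses import dataclass

# The six STRIDE threat categories.
STRIDE = [
    "Spoofing",
    "Tampering",
    "Repudiation",
    "Information disclosure",
    "Denial of service",
    "Elevation of privilege",
]

@dataclass
class Element:
    """One piece of the system under analysis (names here are illustrative)."""
    name: str
    kind: str  # e.g., "process", "data store", "data flow", "external entity"

# Hypothetical elements of a small web application.
elements = [
    Element("Browser", "external entity"),
    Element("Web API", "process"),
    Element("Orders DB", "data store"),
    Element("API-to-DB connection", "data flow"),
]

def checklist(element: Element) -> list[str]:
    """Return the STRIDE questions to ask about a single element."""
    return [
        f"{element.name} ({element.kind}): what could go wrong via {threat.lower()}?"
        for threat in STRIDE
    ]

# Working one element at a time keeps the exercise in small chunks
# rather than treating the whole model as a monolith.
for el in elements:
    for question in checklist(el):
        print(question)
```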

2. Starting With the Wrong Focus

When getting started, should you focus on assets? No. What about shifting your focus to thinking like an attacker? No again. Why?

“It’s a common recommendation, but the trouble is it’s hard to know what an attacker is going to do. It’s hard to know what their motivations are,” said Shostack. “For example, when the SEA [Syrian Electronic Army] took over the Skype Twitter handle (in 2014), no one expected they were going to break into the law enforcement portal at the same time. Focusing in on the attacker might have distracted people from what they would do — rather than theorizing about their motivations.”

For most organizations, Shostack advocates starting the process with the software itself.

“People building software or systems at a financial institution, a supply chain or a healthcare company should start from the software they’re building because it’s what they know best,” he noted in a post for The New School of Information Security blog. “Another way to say this is that they are surrounded by layers of business analysts, architects, project managers and other folks who translate between the business requirements (including assets) and software and system requirements.”

3. Neglecting the Business Side

Threat modeling is pointless if it focuses solely on the network and applications, believes Itay Kozuch, director of threat research at IntSights.

“Many teams conduct common assessments from their network,” said Kozuch. “But it must come from the business side too. When an organization is trying to evaluate risk and do threat modeling, they need to understand the complete assets of the organization. That means not just IT — but on the business side as well.”

This means going beyond just the technology in the threat-modeling process. Failing to involve all of the business’s key stakeholders, Kozuch stressed, leads teams to incorrectly calculate the probability of the threats that need to be considered. He believes there are a lot of angles and perspectives for every threat.
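As a rough illustration of that point, here is a minimal sketch assuming a simple likelihood-times-impact rating on a 1-to-5 scale; the threats, scores and scales below are hypothetical examples, not anything IntSights prescribes. It shows how adding a business-side impact estimate can reorder which threats a team prioritizes.

```python
# Illustrative risk rating: likelihood and impact on a 1-5 scale.
# The threats and numbers below are hypothetical examples.
threats = [
    # (name, likelihood, technical impact, business impact)
    ("SQL injection in reporting app", 3, 4, 2),
    ("Invoice fraud via compromised supplier email", 2, 2, 5),
    ("Credential stuffing against customer portal", 4, 3, 4),
]

def risk(likelihood: int, impact: int) -> int:
    """Simple likelihood x impact score."""
    return likelihood * impact

print("Ranked by technical impact only:")
for name, l, tech, _ in sorted(threats, key=lambda t: risk(t[1], t[2]), reverse=True):
    print(f"  {risk(l, tech):>2}  {name}")

print("Ranked with business impact included:")
for name, l, tech, biz in sorted(
    threats, key=lambda t: risk(t[1], max(t[2], t[3])), reverse=True
):
    print(f"  {risk(l, max(tech, biz)):>2}  {name}")
```

In this toy example, the supplier-email fraud scenario barely registers when only technical impact is counted, but rises sharply once the business stakeholders' view of impact is included, which is the kind of blind spot Kozuch describes.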

“Management must be part of it,” said Kozuch. “It is a business issue. Risk is there because of business.”

4. Miscalculating the Shelf Life of Results

“Threats are always changing,” said Kozuch. “Often — even soon after you’ve completed the process — the results are no longer valid. You can’t base the next few years off of what you’ve uncovered because it doesn’t represent future threats.”

Archie Agarwal, founder and CEO of ThreatModeler Software, agrees. A threat model, he said in a post for CSO, cannot be static. He cautioned that you can’t take a critical application, do a threat model on it once and assume you are done.

“Your threat model should be a living document,” Agarwal said. “You cannot just build a threat model and forget about it. Your applications are alive.”

Wherever you are in your exploration or implementation of threat modeling, there are many resources out there to help you get started. Check out this series on threat modeling basics for an overview of approaches and essential elements for a successful program.
