When it comes to threat modeling, many businesses plan as if there were only a few possible scenarios in which cybersecurity or privacy-related incidents could occur. We need to plan for more cybersecurity hazards than just basic social engineering, insider threats and product vulnerabilities. Both our businesses and our customers face threats that are messier than what fits into these neat little boxes.

The Complex Emotions of Social Engineering

When most of us think of social engineering, we think of someone being psychologically manipulated into handing over sensitive information to some shadowy criminal figure. This definition implies some things that are not always accurate. The first incorrect assumption is that what everyone considers sensitive is the same from one person to the next. The second is that people are able to guard information against their attackers until they’re tricked into revealing it. 

For many people, the emotional context of social engineering is significantly more complex than we account for in traditional threat modeling. Let’s examine a few different — though unfortunately very common — situations where things get more complicated.

When Everyday Information is Extra Sensitive

Most of us do not consider our legal name to be private information. We tell it to relative strangers, and we sign it on forms or in emails that could be easily intercepted. Seeing it pop up online would not worry us. But lots of people go by chosen names other than their legal ones, for a variety of reasons.

Likewise, most of us aren’t terribly concerned about strangers knowing who we spend time with. We allow ourselves to be tagged in our family members’ social media posts, we allow our friend lists to be publicly displayed, and many of us choose to allow apps to broadcast our location when other users are nearby. For most of us, this information being publicly available really isn’t a problem.

This situation is not so simple for people who need to protect their own location and associations, or those of their contacts. This includes mental health professionals, journalists and social workers, whose clients and sources must be kept private because exposure could affect their lives or livelihoods. Activists and people seeking to escape domestic violence or stalking need to closely manage who knows their whereabouts in order to protect their own lives.

When the Attacker is Inside the House

As I mentioned in a recent article on stalkerware, we can’t assume that someone who is the victim of unwanted monitoring software got there by failing to follow security “best practices.” Most threat modeling assumes that people are capable of completely protecting data or assets from attackers.

Statistics for child, disability and elder financial abuse, as well as for domestic violence, show that a shocking number of people experience fraud or other financial crimes when someone they know misuses their sensitive information. The perpetrator is often trusted and may be considered a carer for the victim. Access to the victim’s accounts may be a necessary part of maintaining their housing or health care.

Threat Modeling for Emotional Complexity

As security practitioners, we need to consider a wider variety of possibilities for misuse of data and systems in our care, not just those that affect the majority of people. A shocking number of companies have found themselves in a nasty PR situation because they failed to consider the harm that their products could do to people with exceptional privacy requirements. And this sort of cybersecurity or privacy incident gets much greater traction in the media, due to the emotionally charged nature of that breach of trust.

There are a few questions you can ask to help address these unique situations. 

  • Are there ways to do what you need to do without requiring customers to provide legal name or location information? 

  • Can you allow people to opt in to providing this information, rather than requiring them to go through the steps to opt out? 

  • How can you provide features or architect your systems in a way that can help protect people who cannot conform to security “best practices”? 

  • How will you address the concerns of employees or other staff members who have exceptional privacy requirements or who become victims of stalking or domestic violence?
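To make the opt-in principle from the questions above concrete, here is a minimal sketch of a user profile design in which a chosen display name is the only required identifier, and legal name and location are optional fields shared only on explicit opt-in. The names `UserProfile` and `public_view` are hypothetical, for illustration only; they are not from any particular product.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class UserProfile:
    display_name: str                 # chosen name, always used in the UI
    legal_name: Optional[str] = None  # collected only if genuinely required
    share_location: bool = False      # opt-in flag, defaults to off
    location: Optional[str] = None

    def public_view(self) -> dict:
        """Return only the fields the user has opted in to sharing."""
        view = {"name": self.display_name}
        if self.share_location and self.location:
            view["location"] = self.location
        return view

# A user who has not opted in exposes only their chosen name.
profile = UserProfile(display_name="Sam", legal_name="Samuel Doe")
print(profile.public_view())  # {'name': 'Sam'}
```

The design choice worth noting is that the safe behavior is the default: nothing sensitive is exposed unless the user takes an affirmative step, which protects exactly the people described above who cannot afford an opt-out they might miss.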

Traditional threat modeling scenarios describe the task only in terms of enumeration and systematic analysis. In the end, it’s not just computers we’re protecting, but humans. For many people, threat modeling has a distinctly emotional component, and this is something businesses also need to address.
