When it comes to threat modeling, many businesses plan as if only a handful of cybersecurity or privacy incident scenarios were possible. We need to plan for more hazards than just basic social engineering, insider threats and product vulnerabilities. Both our businesses and our customers face threats that are messier than what fits into these neat little boxes.

The Complex Emotions of Social Engineering

When most of us think of social engineering, we think of someone being psychologically manipulated into handing over sensitive information to some shadowy criminal figure. This definition rests on assumptions that are not always accurate. The first is that everyone considers the same information sensitive. The second is that people are able to guard information from their attackers until they’re tricked into revealing it.

For many people, the emotional context of social engineering is significantly more complex than we account for in traditional threat modeling. Let’s examine a few different — though unfortunately very common — situations where things get more complicated.

When Everyday Information Is Extra Sensitive

Most of us do not consider our legal name to be private information. We tell it to relative strangers, and we sign it on forms or in emails that could be easily intercepted. Seeing it pop up online would not worry us. But lots of people go by chosen names other than their legal ones, for a variety of reasons: a name change after a gender transition, a professional pen or stage name, or a deliberate break from an abusive family, to name a few.

Likewise, most of us aren’t terribly concerned about strangers knowing who we spend time with. We allow ourselves to be tagged in our family members’ social media posts, we let our friend lists be publicly displayed, and many of us let apps broadcast our location when other users are nearby. For most of us, this information being publicly available really isn’t a problem.

This situation is not so simple for people who need to protect their own location and associations, or those of their contacts. This includes mental health professionals, journalists and social workers, whose clients and sources must be kept private because exposure could affect those people’s lives or livelihoods. Activists and people seeking to escape domestic violence or stalking need to closely manage who knows their whereabouts in order to protect their own lives.

When the Attacker Is Inside the House

As I mentioned in a recent article on stalkerware, we can’t assume that someone who is the victim of unwanted monitoring software simply failed to follow security “best practices.” Yet most threat modeling assumes that people are capable of completely protecting their data or assets from attackers.

Statistics for child, disability and elder financial abuse, as well as for domestic violence, show that a shocking number of people experience fraud or other financial crimes at the hands of someone they know who misuses their sensitive information. The perpetrator is often trusted and may be considered a carer for the victim. Access to the victim’s accounts may be a necessary part of maintaining their housing or health care.
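
One architectural answer is to make delegated access a first-class, scoped feature rather than forcing people into all-or-nothing credential sharing. The sketch below is a minimal illustration of that idea, not any specific product’s API; the DelegatedAccess, Permission and CARER_PERMISSIONS names are hypothetical. The point is that a carer can be granted just the capabilities that maintaining housing or health care requires, while everything they do remains visible to the account owner.

```python
from dataclasses import dataclass, field
from enum import Enum, auto


class Permission(Enum):
    PAY_BILLS = auto()
    VIEW_STATEMENTS = auto()
    CHANGE_RECOVERY_EMAIL = auto()
    VIEW_LOGIN_HISTORY = auto()


# A hypothetical "carer" grant: enough access to keep housing and health
# care running, but not enough to lock the account owner out or surveil them.
CARER_PERMISSIONS = {Permission.PAY_BILLS, Permission.VIEW_STATEMENTS}


@dataclass
class DelegatedAccess:
    owner: str
    delegate: str
    permissions: set
    audit_log: list = field(default_factory=list)

    def perform(self, actor: str, action: Permission) -> bool:
        """Allow the owner everything; allow the delegate only the granted
        actions. Every attempt is logged so the owner can review it later."""
        allowed = actor == self.owner or (
            actor == self.delegate and action in self.permissions
        )
        self.audit_log.append((actor, action.name, allowed))
        return allowed


grant = DelegatedAccess("account-owner", "carer", CARER_PERMISSIONS)
print(grant.perform("carer", Permission.PAY_BILLS))              # True
print(grant.perform("carer", Permission.CHANGE_RECOVERY_EMAIL))  # False
```

The design choice here is that necessary help never requires handing over the keys to the whole account, and the audit trail gives the account owner a way to notice misuse.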

Threat Modeling for Emotional Complexity

As security practitioners, we need to consider a wider variety of possibilities for misuse of the data and systems in our care, not just those that affect the majority of people. More than a few companies have found themselves in a nasty PR situation because they failed to consider the harm their products could do to people with exceptional privacy requirements. And this sort of cybersecurity or privacy incident gains much greater traction in the media because of the emotionally charged nature of that breach of trust.

There are a few questions you can ask to help address these unique situations. 

  • Are there ways to do what you need to do without requiring customers to provide their legal name or location information? 

  • Can you allow people to opt in to providing this information, rather than requiring them to go through the steps to opt out? (See the sketch after this list for what opt-in defaults might look like.) 

  • How can you provide features or architect your systems in a way that can help protect people who cannot conform to security “best practices”? 

  • How will you address the concerns of employees or other staff members who have exceptional privacy requirements or who become victims of stalking or domestic violence?
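
To make the first two questions concrete, here is a minimal sketch of a privacy-protective user profile. The UserProfile and Visibility names and their fields are hypothetical illustrations, not any particular product’s schema: the chosen name is the only required identifier, the legal name is optional and never displayed, and every disclosure (friend lists, tagging, location) is opt-in rather than opt-out.

```python
from dataclasses import dataclass
from enum import Enum
from typing import Optional


class Visibility(Enum):
    PRIVATE = "private"    # visible only to the account holder
    CONTACTS = "contacts"  # visible to approved contacts
    PUBLIC = "public"      # visible to everyone


@dataclass
class UserProfile:
    # The chosen (display) name is the only name shown anywhere.
    display_name: str

    # Legal name is optional, collected only when a specific workflow
    # (payments, shipping) genuinely requires it, and never shown to
    # other users.
    legal_name: Optional[str] = None

    # Privacy-protective defaults: every disclosure is opt-in.
    friend_list_visibility: Visibility = Visibility.PRIVATE
    allow_tagging_by_others: bool = False
    broadcast_location_to_nearby_users: bool = False

    def public_view(self) -> dict:
        """Return only what this user has explicitly chosen to expose."""
        view = {"display_name": self.display_name}
        if self.broadcast_location_to_nearby_users:
            view["location_sharing"] = "enabled by user"
        return view


# A brand-new account exposes nothing beyond its chosen name until the
# user explicitly opts in to more.
profile = UserProfile(display_name="River")
print(profile.public_view())  # {'display_name': 'River'}
```

The underlying design choice is that safe behavior is the zero-effort path: someone who cannot safely manage granular settings is protected by default rather than exposed by default.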

Traditional threat modeling describes the task only in terms of enumeration and systematic analysis. In the end, though, it’s not just computers we’re protecting, but humans. For many people, threat modeling has a distinctly emotional component, and this is something businesses also need to address.
