There seems to be a collective sense that we’re all being pressured to divulge more about ourselves online than we really should. Yet few of us are aware of how much friction is built into websites and apps to compel us to give up our privacy. Design choices that guide us against our own self-interest are called dark patterns, and they’re something we should all be aware of so we can make more informed choices about protecting our data privacy, and that of our customers.

You’ve probably had the experience of navigating a store that’s so labyrinthine that there is no simple way to go in and find what you’re looking for. Or, you’ve been through a checkout line that runs you through a gauntlet of items for impulse purchase. These things may be annoying, but they’re fairly benign ways to influence you to purchase more than you might otherwise.

On the internet, retailers and services are ratcheting up manipulation techniques to extract more of your money, your data or both. These practices can be as subtle as word choice or the color of a button, or as obvious as burying menu options in nonintuitive places. This friction can have a subtle, negative impact on a customer’s impression of an organization, and that impression weighs even more heavily after a company suffers a privacy incident.

What Are Dark Patterns?

Making truly usable technology is an art and a science that most people give little thought to. We take much about the flow of information for granted, but details such as font choice, or which words are used to communicate an idea, can make a big difference in how we use an interface.

I don’t know about you, but since most checkout lines now have some sort of computerized interface where you input your payment card, there are certain stores where I will almost invariably screw up the process. These are usually the machines where someone has taped a sticky note to the corner, with instructions written in all caps or with several exclamation marks.

“You have to select ‘Yes’ first and then wait for the blinking blue arrows to illuminate before you put your card in!!!”

These bad design decisions create friction in the process, but they do so unintentionally. Some apps and sites, however, subvert their user interfaces to intentionally manipulate users into doing things they wouldn’t normally do. These methods are collectively known as dark patterns.

Dark patterns are tricks that compel you to do things you didn’t mean to, such as purchasing products you didn’t really intend to purchase or divulging information you might not otherwise have divulged. In the context of data privacy, this could include wording options in a way that makes it unclear what action you’re taking, or making certain actions, such as unsubscribing or deleting data, prohibitively difficult to accomplish.

How to Avoid Dark Patterns in Your Organization

It might seem obvious that manipulating users into doing things they don’t intend to is not a good thing. But there’s always a push to get more users and more sales, and this often leads companies to use high-pressure tactics to compel customers to behave in certain ways.

On the other hand, eschewing trickery engenders digital trust in your customer base. Getting informed consent from your customers when you ask them to share sensitive information to use your site or service can lead to fewer, or at least less severe, bad-press moments if you do have a security or privacy incident.

It’s possible to create mutually beneficial, long-term relationships with your customers by being clear and transparent. This is especially true when industry trends are heading toward manipulative design, because you will stand out as a shining example of a site or service that is usable and trustworthy. Over time, these sketchy practices become less effective as the shock value wears off and consumers wise up to dark patterns. Now is the time to get ahead of the curve to create digital trust in your data gathering practices.

As TechCrunch described, there are many ways deceptive design is used to create friction when customers try to exercise their autonomy. Here are a few ways to avoid bogging down your user experience with common dark patterns.

Avoid Manipulative Language and Actions

“No thanks, I don’t like saving money!” and similarly sarcastic or shame-inducing language might seem spunky, but it’s more likely to provoke eye-rolls or indignation. Customers may initially be motivated by countdown timers or messages about limited supply, but if every interaction carries the same urgent message, the overall impression is that the behavior is deceptive.

There’s a subtle difference between asking a customer to confirm an action, especially when deleting data or an account, and badgering them. Avoid repeating the same “Are you sure?” message several times, even when it’s worded differently each time, and especially if any version is meant to create guilt, fear, uncertainty or doubt.
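To make the distinction concrete, here is a minimal sketch in TypeScript of a deletion flow that asks once and then respects the answer. The “ask” and “deleteAccount” names are hypothetical stand-ins for whatever dialog and API layers you actually use.

```typescript
// A minimal sketch of a confirmation flow that asks exactly once, with
// neutral wording. `deleteAccount` and `ask` are hypothetical; the point
// is structural: one clear question, and "keep" is final.
type Choice = "delete" | "keep";

// Hypothetical backend call, stubbed so the sketch is self-contained.
async function deleteAccount(): Promise<void> {
  console.log("Account deletion requested.");
}

async function confirmAccountDeletion(
  ask: (message: string, options: [Choice, Choice]) => Promise<Choice>
): Promise<void> {
  // One plain question with two plainly labeled choices: no guilt trip,
  // no countdown and no repeated "Are you sure?" loop.
  const choice = await ask(
    "Delete your account? Your data will be removed permanently.",
    ["delete", "keep"]
  );
  if (choice === "delete") {
    await deleteAccount();
  }
  // If the user chose "keep", nothing else happens: no re-prompt.
}
```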

Use Clear Phrasing

Whenever you’re asking a customer to select or confirm an action, it’s important to make sure your choice of wording is as clear as possible. Err on the side of concise wording, and avoid double negatives. When possible, use button labels that describe what the selection will do — e.g., “delete,” “discard,” “save,” etc. — rather than “OK,” “cancel” or “ignore.”
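As a quick illustration, this hypothetical TypeScript sketch contrasts a vague dialog with a clear one. The “Dialog” shape and its field names are assumptions for the example, not any particular framework’s API.

```typescript
// A sketch of descriptive button labels. The `Dialog` shape is hypothetical;
// the idea is that each label names the action it performs.
interface Dialog {
  message: string;
  confirmLabel: string; // names the action taken
  dismissLabel: string; // names the alternative
}

// Unclear: a double negative plus generic labels. Which button keeps the draft?
const vague: Dialog = {
  message: "Don't you want to not discard your changes?",
  confirmLabel: "OK",
  dismissLabel: "Cancel",
};

// Clear: a concise question, and labels that describe each outcome.
const clear: Dialog = {
  message: "Discard your unsaved changes?",
  confirmLabel: "Discard changes",
  dismissLabel: "Keep editing",
};
```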

Make Options Easily Available

If you make it easy for your users to delete data or cancel an action, especially where their privacy is concerned, they might be more likely to share data or make future transactions. If their experience canceling, unsubscribing or deleting their account is as positive as possible, they leave with a better “last” impression and may be more inclined to revisit your site or service in the future.
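As one sketch of what “easily available” can mean in practice, the hypothetical TypeScript example below, assuming an Express-style app, handles an unsubscribe link in a single step: one click from the email, an immediate answer, and no login wall or retention screens in between.

```typescript
// A one-step unsubscribe endpoint. The route, token store and handler
// names are hypothetical.
import express from "express";

const app = express();

// Stand-in for a database of single-use unsubscribe tokens.
const tokens = new Set(["abc123"]);

async function unsubscribeByToken(token: string): Promise<boolean> {
  return tokens.delete(token); // one lookup, one action
}

app.get("/unsubscribe/:token", async (req, res) => {
  const ok = await unsubscribeByToken(req.params.token);
  // Confirm the outcome immediately instead of routing through extra steps.
  res
    .status(ok ? 200 : 400)
    .send(ok ? "You are unsubscribed." : "This link is invalid or was already used.");
});

app.listen(3000);
```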

Understand What Data You Truly Need — and Why

The best way to protect your customers’ data privacy is to gather as little sensitive information as possible. When making decisions about what information to gather from customers, understand why you’re asking for it in the first place. If you can accomplish what you need to with less — or, at least, less sensitive — information, you’ll have less data to protect. And when you understand your reasons for gathering information, you can make this choice clear to your customers.
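One lightweight way to hold that line is to treat your signup schema as the minimization boundary and record the reason for every field you keep. The following TypeScript sketch is illustrative; the field names and purposes are assumptions, not a prescription.

```typescript
// A sketch of data minimization at the schema level: collect only the
// fields the feature actually needs, and record why each one is collected.
interface SignupData {
  email: string;       // needed to sign in and recover the account
  displayName: string; // shown to other users
  // Deliberately absent: birth date, phone number, postal address.
  // If a feature later needs one of these, ask at that point and say why.
}

// Recording the purpose of each field keeps the "why" explicit, so it can
// be surfaced to customers at the moment of collection.
const fieldPurposes: Record<keyof SignupData, string> = {
  email: "Used to sign in and recover your account.",
  displayName: "Shown on your public profile.",
};
```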

Being clear and honest with customers is always good business. As online interactions become more complex, we need to work harder to make sure customers comprehend what’s going on when they take actions or share their information. By making usable sites and apps, we can help protect and maintain good relationships with our customers as well as their data privacy.
