Online shopping bots are not new to the e-commerce world. Stores use bots to offer better customer service, but malicious bots can cause major harm to a business and pose cybersecurity risks to e-commerce retailers and consumers alike.

Some customers use shopping bots to execute automated tasks based on a set of instructions: log onto the website -> look for a specific product -> add the product to the cart -> check out. Almost all shopping bots hold an unfair advantage over manual shoppers. For example, a user waiting for a restock of favorite items, such as sought-after sporting event tickets or collectible trading cards, would otherwise have to sit at their computer all day and refresh their browser by hand.

A shopping bot does that work instead. The user programs it to search a given website for a specific string, such as an in-stock indicator. When the string appears, the bot runs a task to add the product to the shopping cart and check out or, in some cases, to notify an email address. When such bots work correctly and in parallel with one another, the sought-after product usually sells out quickly.
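To make this workflow concrete, here is a minimal sketch of the notify-by-email variant. It assumes a hypothetical product URL, a page whose in-stock state can be detected with a simple string match and a local SMTP relay for sending the alert; real storefronts typically require logins, render content with JavaScript and actively defend against automated traffic.

```python
# Minimal restock-watcher sketch (assumptions: hypothetical URL, plain HTML
# page, local SMTP relay on localhost). Requires the third-party requests library.
import smtplib
import time
from email.message import EmailMessage

import requests

PRODUCT_URL = "https://shop.example.com/limited-edition-card"  # hypothetical
TARGET_STRING = "Add to cart"         # string that signals the item is in stock
NOTIFY_ADDRESS = "buyer@example.com"  # hypothetical notification address


def in_stock() -> bool:
    """Fetch the product page and look for the in-stock indicator."""
    response = requests.get(PRODUCT_URL, timeout=10)
    response.raise_for_status()
    return TARGET_STRING in response.text


def send_notification() -> None:
    """Email a simple alert through a local SMTP relay."""
    message = EmailMessage()
    message["Subject"] = "Product back in stock"
    message["From"] = NOTIFY_ADDRESS
    message["To"] = NOTIFY_ADDRESS
    message.set_content(f"The item appears to be available again: {PRODUCT_URL}")
    with smtplib.SMTP("localhost") as smtp:
        smtp.send_message(message)


if __name__ == "__main__":
    # Poll the page at a fixed interval instead of refreshing a browser by hand.
    while not in_stock():
        time.sleep(60)
    send_notification()
```

The add-to-cart-and-check-out variant follows the same pattern in principle; it simply automates the site's cart and payment flow instead of stopping at a notification.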

How Shopping Bots Can Pose Cybersecurity Risks

The general impression of a shopping bot is that it drives sales. So, what could the problem be with shopping bots?

While good bots are welcome, some bots can be malicious, especially if they are in the wrong hands. One survey showed that businesses have lost more than $100,000 in revenue from a single bot attack.

Attacks on e-commerce sites by bad shopping bots are nothing new. An Imperva report presented the following statistics:

  • Bots comprise 30.8% of traffic to e-commerce websites
  • Of all traffic to e-commerce sites, 17.7% comes from bad bots
  • Nearly a quarter (23.5%) of these bad bots qualify as sophisticated bots

So, how can you tell a good bot from a bad one? Some types can pose more business and cybersecurity risks to online retailers and customers than others.

Credential Stuffing

Credential stuffing bots pretend to interact with the system as real customers by replaying real customers' identities, obtained from leaks on the internet or bought on the dark web. By testing these credentials at scale, such bots compromise accounts protected by weak or reused passwords. The stolen information can include email addresses, credit card numbers and other personal data, which enables adversaries to launch cyberattacks like phishing, business email compromise and malware attacks. These bots affect the confidentiality, integrity and availability of data in systems and can damage a firm's reputation.

Inventory Denial

Sometimes, it becomes virtually impossible to purchase a product online because it appears to be sold out. This can be the work of inventory denial bots. These bots mimic human traffic to access e-commerce websites and load large volumes of items into checkout baskets, fooling the system into thinking the inventory has sold out and prompting negative feedback about the targeted brand on social media. The threat actors behind such bots do not purchase the items right away. Instead, they offer them for sale on alternative websites at higher prices. Once a customer places an order there, the bot completes the original transaction by checking out the held cart, earning the malicious actors a profit in the bargain.

Scalping Bots

Scalping bots search the internet for limited-availability products, which may already be out of stock by the time real users look for them. The moment items become available, these bots add them to the cart, autofill the purchase forms and check out in a fraction of the time a person needs, so the real customers waiting for the items cannot purchase them. Besides causing financial loss to the business, scalping bots rob it of the chance to learn who its real customers are, preventing it from cross-selling products and engaging with customers to promote other merchandise.

Scraper Bots

Scraper bots scan web pages, harvesting product listings, prices and even vulnerabilities, and scrape the data into dark web libraries. These bots use application programming interfaces (APIs) to place orders and complete transactions without navigating an e-commerce website the way humans do. In doing so, they can act like inventory denial bots, causing sell-outs or even website crashes. Malicious actors also use the scraped data to undercut genuine retailers by lowering their own prices.

Keeping Ahead of Shopping Bots

Malicious shopping bots can harm a business by tarnishing its brand image, crashing websites, increasing support costs, jeopardizing business deals, severing connections with customers and skewing crucial decision-making processes. In addition, these bots collect valuable data that the adversaries behind them can exploit for profit.

This is all the more reason for retailers to adopt the right cybersecurity measures. Stay up to date on how threat actors operate and how they can use these bots to infiltrate your information assets.
