Attackers innovate nearly as fast as the technology they abuse, and both surge forward day by day. Now, as we enter the AI era, machines not only mimic human behavior but also permeate nearly every facet of our lives. Yet despite mounting anxiety about AI’s implications, the full extent of its potential misuse by attackers remains largely unknown.

To better understand how attackers can capitalize on generative AI, we conducted a research project that sheds light on a critical question: Do current generative AI models have the same deceptive abilities as the human mind?

Imagine a scenario where AI squares off against humans in a battle of phishing. The objective? To determine which contender can get a higher click rate in a phishing simulation against organizations. As someone who writes phishing emails for a living, I was excited to find out the answer.

With only five simple prompts, we were able to trick a generative AI model into developing highly convincing phishing emails in just five minutes, about the time it takes me to brew a cup of coffee. It generally takes my team about 16 hours to build a phishing email, and that’s without factoring in infrastructure setup. So attackers can potentially save nearly two days of work by using generative AI models. And the AI-generated phish was so convincing that it nearly beat the one crafted by experienced social engineers; the fact that it came that close is itself an important development.

In this blog, we’ll detail how the AI prompts were created, how the test was conducted and what this means for social engineering attacks today and tomorrow.

Round one: The rise of the machines

In one corner, we had AI-generated phishing emails with highly cunning and convincing narratives.

Creating the prompts. Through systematic experimentation and refinement, we designed a collection of just five prompts that instruct ChatGPT to generate phishing emails tailored to specific industry sectors.

To start, we asked ChatGPT to detail the primary areas of concern for employees within those industries. With the industry and employee concerns established as the primary focus, we then prompted ChatGPT to make strategic selections of the social engineering and marketing techniques to use within the email, choices aimed at maximizing the number of employees who would click a link in the email itself. Next, a prompt asked ChatGPT who the sender should be (e.g., someone internal to the company, a vendor, an outside organization, etc.). Lastly, we asked ChatGPT to pull the following elements together to create the phishing email:

  1. Top areas of concern for employees in the healthcare industry: Career Advancement, Job Stability, Fulfilling Work and more
  2. Social engineering techniques that should be used: Trust, Authority, Social Proof
  3. Marketing techniques that should be used: Personalization, Mobile Optimization, Call to Action
  4. Person or company it should impersonate: Internal Human Resources Manager
  5. Email generation: Given all the information listed above, ChatGPT generated the redacted email below, which my team later sent to more than 800 employees.
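
To make the chaining concrete, here is a minimal sketch of the multi-turn prompt pattern described above, written against OpenAI’s Python SDK. The model name and prompt wording are illustrative placeholders rather than the exact prompts used in this research, and the final email-generation step is deliberately omitted.

    # A sketch of the prompt chain, assuming OpenAI's Python SDK (openai >= 1.0).
    # Model name and prompts are placeholders, not the research team's prompts;
    # the final email-generation step is intentionally left out.
    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    # Each answer is fed back into the conversation, so later prompts can
    # build on earlier ones (concerns -> techniques -> sender).
    steps = [
        "List the top areas of concern for employees in the healthcare industry.",
        "Pick the social engineering techniques best suited to those concerns.",
        "Pick the marketing techniques most likely to maximize link clicks.",
        "Suggest who the sender of an internal email on this topic should be.",
    ]

    messages = []
    for prompt in steps:
        messages.append({"role": "user", "content": prompt})
        reply = client.chat.completions.create(model="gpt-4o", messages=messages)
        answer = reply.choices[0].message.content
        messages.append({"role": "assistant", "content": answer})
        print(answer)

The point is less the specific wording than the structure: each individually benign step narrows the context, so the final request reads as a natural continuation of the conversation.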

I have nearly a decade of social engineering experience and have crafted hundreds of phishing emails, and even I found the AI-generated phishing emails fairly persuasive. In fact, three organizations originally agreed to participate in this research project, and two backed out completely after reviewing both phishing emails because they expected a high success rate. As the prompts showed, the organization that did participate was in the healthcare industry, currently one of the most targeted sectors.

Productivity gains for attackers. While a phishing email typically takes my team about 16 hours to craft, the AI phishing email was generated in just five minutes with only five simple prompts.

Round two: The human touch

In the other corner, we had seasoned X-Force Red social engineers.

Armed with creativity and a dash of psychology, these social engineers created phishing emails that resonated with their targets on a personal level. The human element added an air of authenticity that’s often hard to replicate.

Step 1: OSINT – Our approach to phishing invariably begins with open-source intelligence (OSINT) gathering. OSINT is the collection of publicly available information, which is then analyzed and used as the foundation of a social engineering campaign. Valuable data sources for our OSINT work include LinkedIn, the organization’s official blog, Glassdoor and a range of other platforms.

During our OSINT activities, we uncovered a blog post detailing the recent launch of an employee wellness program, which coincided with the completion of several prominent projects. The program had favorable testimonials from employees on Glassdoor attesting to its efficacy, and we identified the individual responsible for managing it via LinkedIn.

Step 2: Email crafting – Using the data gathered during our OSINT phase, we began carefully constructing our phishing email. As a foundational step, we needed to impersonate someone with the authority to speak on the topic. To enhance the air of authenticity and familiarity, we incorporated a legitimate website link to a recently concluded project.

To add persuasive impact, we introduced perceived urgency through “artificial time constraints.” We told recipients that the survey comprised merely “five brief questions,” assured them that completing it would take no more than “a few minutes” of their valuable time and gave a deadline of “this Friday.” This framing underscored the minimal imposition on their schedules, reinforcing the nonintrusive feel of the request while still pressing them to act quickly.

Using a survey as a phishing pretext is usually risky, as surveys are often seen as a red flag or simply ignored. However, considering the data we had uncovered, we decided the potential benefits outweighed the risks.

The following redacted phishing email was sent to over 800 employees at a global healthcare organization:

The champion: Humans triumph, but barely!

After an intense round of A/B testing, the results were clear: humans emerged victorious but by the narrowest of margins.

While the human-crafted phishing emails managed to outperform AI, it was a nail-bitingly close contest. Here’s why:

  • Emotional Intelligence: Humans understand emotions in ways that AI can only dream of. We can weave narratives that tug at the heartstrings and sound more realistic, making recipients more likely to click on a malicious link. For example, the humans chose a legitimate, organization-specific pretext, while the AI chose a broad topic, making the human-generated phish more believable.
  • Personalization: In addition to incorporating the recipient’s name into the introduction of the email, we also referenced a legitimate organization that delivers tangible advantages to their workforce.
  • Short and succinct subject line: The human-generated phish had a subject line that was short and to the point (“Employee Wellness Survey”), while the AI-generated phish had a lengthy one (“Unlock your Future: Limited Advancements at Company X”), potentially arousing suspicion before employees even opened the email.

Not only did the AI-generated phish lose to humans, but it was also reported as suspicious at a higher rate.
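
For readers curious what “the narrowest of margins” can mean in practice, here is a hedged sketch of how two click rates from an A/B test like this one could be compared for statistical significance. The counts are hypothetical placeholders, not the study’s actual results.

    # Hypothetical comparison of two phishing click rates with a
    # two-proportion z-test (statsmodels). Numbers are illustrative only.
    from statsmodels.stats.proportion import proportions_ztest

    clicks = [120, 100]      # placeholder clicks: [human-crafted, AI-generated]
    recipients = [800, 800]  # each email went to roughly 800 employees

    stat, p_value = proportions_ztest(clicks, recipients)
    print(f"z = {stat:.2f}, p = {p_value:.3f}")
    # A p-value above the usual 0.05 threshold would mean the gap between
    # the two click rates could plausibly be noise, consistent with a
    # "nail-bitingly close" contest.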

The takeaway: A glimpse into the future

While X-Force has not witnessed the wide-scale use of generative AI in current campaigns, tools such as WormGPT, built as unrestricted or semi-restricted LLMs, have been observed for sale on various forums advertising phishing capabilities, showing that attackers are testing AI’s use in phishing campaigns. Even restricted versions of generative AI models can be tricked into phishing via simple prompts, and unrestricted versions may offer attackers far more efficient ways to scale sophisticated phishing emails in the future.

Humans may have narrowly won this match, but AI is constantly improving. As the technology advances, we can only expect AI to become more sophisticated and potentially even outperform humans one day. As we know, attackers are constantly adapting and innovating. Just this year, we’ve seen scammers increasingly use AI-generated voice clones to trick people into sending money or gift cards, or into divulging sensitive information.

While humans may still have the upper hand when it comes to emotional manipulation and crafting persuasive emails, the emergence of AI in phishing signals a pivotal moment in social engineering attacks. Here are five key recommendations for businesses and consumers to stay prepared:

  1. When in doubt, call the sender: If you’re questioning whether an email is legitimate, pick up the phone and verify. Consider choosing a safe word with close friends and family members to use in the event of a vishing or AI-generated phone scam.
  2. Abandon the grammar stereotype: Dispel the myth that phishing emails are riddled with bad grammar and spelling errors. AI-driven phishing attempts are increasingly sophisticated and often grammatically flawless. That’s why it’s imperative to re-educate our employees and emphasize that grammatical errors are no longer the primary red flag. Instead, train them to be vigilant about the length and complexity of email content: longer emails, often a hallmark of AI-generated text, can be a warning sign (see the sketch after this list).
  3. Revamp social engineering programs: This includes bringing techniques like vishing into training programs. Vishing is simple to execute and often highly effective; an X-Force report found that targeted phishing campaigns that added phone calls were 3X more effective than those that didn’t.
  4. Strengthen identity and access management controls: Advanced identity access management systems can help validate who is accessing what data, whether they have the appropriate entitlements and that they are who they say they are.
  5. Constantly adapt and innovate: The rapid evolution of AI means cyber criminals will continue to refine their tactics. We must adopt that same mindset of continuous adaptation and innovation. Regularly updating internal TTPs, threat detection systems and employee training materials is essential to stay one step ahead of malicious actors.
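
As referenced in recommendation 2, here is a minimal, hypothetical sketch of a “length and complexity” heuristic. The function name and thresholds are illustrative assumptions, not tuned values, and a real control would combine this signal with many others.

    # Hypothetical heuristic for recommendation 2: flag email bodies that
    # are unusually long or dense, a noted hallmark of AI-generated text.
    # Thresholds below are illustrative assumptions, not tuned values.
    def looks_suspiciously_long(body: str,
                                max_words: int = 250,
                                max_avg_sentence_words: int = 25) -> bool:
        normalized = body.replace("!", ".").replace("?", ".")
        sentences = [s for s in normalized.split(".") if s.strip()]
        words = body.split()
        avg_len = len(words) / max(len(sentences), 1)
        return len(words) > max_words or avg_len > max_avg_sentence_words

    # Example: a short, direct message should not be flagged.
    print(looks_suspiciously_long("Please complete the survey by Friday."))  # False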

The emergence of AI in phishing attacks challenges us to reevaluate our approaches to cybersecurity. By embracing these recommendations and staying vigilant in the face of evolving threats, we can strengthen our defenses, protect our enterprises and ensure the security of our data and people in today’s dynamic digital age.

For more information on X-Force’s security research, threat intelligence and hacker-led insights, visit the X-Force Research Hub.

To learn more about how IBM can help businesses accelerate their AI journey securely, visit here.
