Attackers innovate nearly as fast as technology develops. Now, as we enter the AI era, machines not only mimic human behavior but also permeate nearly every facet of our lives. Yet, despite mounting anxiety about AI’s implications, the full extent of its potential misuse by attackers is largely unknown.

To better understand how attackers can capitalize on generative AI, we conducted a research project that sheds light on a critical question: Do the current generative AI models have the same deceptive abilities as the human mind?

Imagine a scenario where AI squares off against humans in a battle of phishing. The objective? To determine which contender can get a higher click rate in a phishing simulation against organizations. As someone who writes phishing emails for a living, I was excited to find out the answer.

With only five simple prompts, we were able to trick a generative AI model into developing highly convincing phishing emails in just five minutes, the same time it takes me to brew a cup of coffee. It generally takes my team about 16 hours to build a phishing email, and that’s without factoring in the infrastructure set-up. So, attackers can potentially save nearly two days of work by using generative AI models. And while the AI-generated phish didn’t quite beat the one crafted by experienced social engineers, the fact that it came so close is an important development.

In this blog, we’ll detail how the AI prompts were created, how the test was conducted and what this means for social engineering attacks today and tomorrow.

Round one: The rise of the machines

In one corner, we had AI-generated phishing emails with highly cunning and convincing narratives.

Creating the prompts. Through a systematic process of experimentation and refinement, a collection of only five prompts was designed to instruct ChatGPT to generate phishing emails tailored to specific industry sectors.

To start, we asked ChatGPT to detail the primary areas of concern for employees within those industries. After prioritizing the industry and employee concerns as the primary focus, we prompted ChatGPT to make strategic selections on the use of both social engineering and marketing techniques within the email. These choices aimed to optimize the likelihood of a greater number of employees clicking on a link in the email itself. Next, a prompt asked ChatGPT who the sender should be (e.g., someone internal to the company, a vendor, an outside organization, etc.). Lastly, we asked ChatGPT to add the following completions to create the phishing email:

  1. Top areas of concern for employees in the healthcare industry: Career Advancement, Job Stability, Fulfilling Work and more
  2. Social engineering techniques that should be used: Trust, Authority, Social Proof
  3. Marketing techniques that should be used: Personalization, Mobile Optimization, Call to Action
  4. Person or company it should impersonate: Internal Human Resources Manager
  5. Email generation: Given all the information listed above, ChatGPT generated the below redacted email, which was later sent by my team to more than 800 employees.
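The chaining described above can be sketched in code. The prompt wording below is hypothetical and illustrative, not X-Force’s actual prompts; in a real chain, each step’s answer would come back from the model and inform the next prompt.

```python
# Hypothetical sketch of the five-step prompt chain described above.
# In practice, each prompt would be sent to the model in sequence and
# the model's answer would shape the wording of the following prompt.

def build_prompt_chain(industry: str) -> list[str]:
    """Return the five chained prompts, each building on the last."""
    prompts = []
    # Step 1: surface the target audience's top concerns
    prompts.append(
        f"List the top areas of concern for employees in the {industry} industry."
    )
    # Step 2: pick social engineering techniques suited to those concerns
    prompts.append(
        "Given those concerns, which social engineering techniques "
        "(e.g., trust, authority, social proof) would maximize link clicks?"
    )
    # Step 3: layer in marketing techniques
    prompts.append(
        "Which marketing techniques (e.g., personalization, mobile "
        "optimization, call to action) should the email use?"
    )
    # Step 4: choose a plausible sender to impersonate
    prompts.append(
        "Who should the email appear to come from: an internal employee, "
        "a vendor or an outside organization?"
    )
    # Step 5: generate the email from everything chosen so far
    prompts.append(
        "Using all of the choices above, write the email."
    )
    return prompts

chain = build_prompt_chain("healthcare")
```

Note how little domain knowledge the operator needs: the industry name is the only input, and the model fills in everything else.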

I have nearly a decade of social engineering experience and have crafted hundreds of phishing emails, and even I found the AI-generated phishing emails to be fairly persuasive. In fact, three organizations originally agreed to participate in this research project, and two backed out completely after reviewing both phishing emails because they expected a high success rate. As the prompts showed, the organization that participated in this research study was in the healthcare industry, currently one of the most targeted sectors.

Productivity gains for attackers. While a phishing email typically takes my team about 16 hours to craft, the AI phishing email was generated in just five minutes with only five simple prompts.

Round two: The human touch

In the other corner, we had seasoned X-Force Red social engineers.

Armed with creativity, and a dash of psychology, these social engineers created phishing emails that resonated with their targets on a personal level. The human element added an air of authenticity that’s often hard to replicate.

Step 1: OSINT – Our approach to phishing invariably begins with Open-Source Intelligence (OSINT) gathering. OSINT is the collection of publicly accessible information, which is then analyzed and used as the foundation of a social engineering campaign. Valuable data sources for our OSINT work include LinkedIn, the organization’s official blog, Glassdoor and many other platforms.

During our OSINT activities, we successfully uncovered a blog post detailing the recent launch of an employee wellness program, coinciding with the completion of several prominent projects. Encouragingly, this program had favorable testimonials from employees on Glassdoor, attesting to its efficacy and employee satisfaction. Furthermore, we identified an individual responsible for managing the program via LinkedIn.

Step 2: Email crafting – Using the data gathered during the OSINT phase, we began carefully constructing our phishing email. As a foundational step, we needed to impersonate someone with the authority to address the topic credibly. To enhance the sense of authenticity and familiarity, we incorporated a legitimate website link to a recently concluded project.

To add persuasive impact, we introduced “artificial time constraints” to create a sense of urgency. We told recipients that the survey comprised merely “five brief questions,” assured them that completing it would require no more than “a few minutes” of their valuable time, and gave a deadline of “this Friday.” This deliberate framing underscored the minimal imposition on their schedules, reinforcing the nonintrusive nature of the request.

Using a survey as a phishing pretext is usually risky, as it’s often seen as a red flag or simply ignored. However, considering the data we uncovered we decided that the potential benefits could outweigh the associated risks.

The following redacted phishing email was sent to over 800 employees at a global healthcare organization:

The champion: Humans triumph, but barely!

After an intense round of A/B testing, the results were clear: humans emerged victorious but by the narrowest of margins.

While the human-crafted phishing emails managed to outperform AI, it was a nail-bitingly close contest. Here’s why:

  • Emotional Intelligence: Humans understand emotions in ways that AI can only dream of. We can weave narratives that tug at the heartstrings and sound more realistic, making recipients more likely to click on a malicious link. For example, humans chose a legitimate example within the organization, while AI chose a broad topic, making the human-generated phish more believable.
  • Personalization: In addition to incorporating the recipient’s name into the introduction of the email, we also provided a reference to a legitimate organization, delivering tangible advantages to their workforce.
  • Short and succinct subject line: The human-generated phish had an email subject line that was short and to the point (“Employee Wellness Survey”) while the AI-generated phish had an extremely lengthy subject line (“Unlock your Future: Limited Advancements at Company X”), potentially causing suspicion even before employees opened the email.

Not only did the AI-generated phish lose to humans, but it was also reported as suspicious at a higher rate.
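A margin this narrow is worth putting in statistical context. The click rates below are invented for illustration (the article does not publish the actual figures); a two-proportion z-test shows how small such a gap can be relative to sampling noise at this sample size.

```python
# Hypothetical illustration: with ~800 recipients per email, even a
# 3-percentage-point gap in click rate may not be statistically
# significant. The click counts below are made up for the example.
import math

def two_proportion_z(clicks_a: int, n_a: int, clicks_b: int, n_b: int) -> float:
    """z-statistic for the difference between two click rates."""
    p_a, p_b = clicks_a / n_a, clicks_b / n_b
    pooled = (clicks_a + clicks_b) / (n_a + n_b)          # pooled click rate
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    return (p_a - p_b) / se

# e.g. human phish 14% (112/800) vs. AI phish 11% (88/800), both invented
z = two_proportion_z(112, 800, 88, 800)
# |z| < 1.96 means the gap is not significant at the 5% level
```

Under these assumed numbers, z comes out below the conventional 1.96 threshold, which is consistent with the article’s framing of a “nail-bitingly close” contest rather than a decisive human win.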

The takeaway: A glimpse into the future

While X-Force has not witnessed the wide-scale use of generative AI in current campaigns, tools such as WormGPT, which were built as unrestricted or semi-restricted LLMs, were observed for sale on various forums advertising phishing capabilities, showing that attackers are testing AI’s use in phishing campaigns. While even restricted versions of generative AI models can be tricked into phishing via simple prompts, these unrestricted versions may offer more efficient ways for attackers to scale sophisticated phishing emails in the future.

Humans may have narrowly won this match, but AI is constantly improving. As technology advances, we can only expect AI to become more sophisticated and potentially even outperform humans one day. As we know, attackers are constantly adapting and innovating. Just this year we’ve seen scammers increasingly use voice clones generated by AI to trick people into sending money or gift cards, or into divulging sensitive information.

While humans may still have the upper hand when it comes to emotional manipulation and crafting persuasive emails, the emergence of AI in phishing signals a pivotal moment in social engineering attacks. Here are five key recommendations for businesses and consumers to stay prepared:

  1. When in doubt, call the sender: If you’re questioning whether an email is legitimate, pick up the phone and verify. Consider choosing a safe word with close friends and family members that you can use in the case of vishing or an AI-generated phone scam.
  2. Abandon the grammar stereotype: Dispel the myth that phishing emails are riddled with bad grammar and spelling errors. AI-driven phishing attempts are increasingly sophisticated, often demonstrating grammatical correctness. That’s why it’s imperative to re-educate our employees and emphasize that grammatical errors are no longer the primary red flag. Instead, we should train them to be vigilant about the length and complexity of email content. Longer emails, often a hallmark of AI-generated text, can be a warning sign.
  3. Revamp social engineering programs: This includes bringing techniques like vishing into training programs. This technique is simple to execute, and often highly effective. An X-Force report found that targeted phishing campaigns that add phone calls were 3X more effective than those that didn’t.
  4. Strengthen identity and access management controls: Advanced identity access management systems can help validate who is accessing what data, whether they have the appropriate entitlements and that they are who they say they are.
  5. Constantly adapt and innovate: The rapid evolution of AI means that cyber criminals will continue to refine their tactics. We must adopt that same mindset of continuous adaptation and innovation. Regularly updating internal TTPs, threat detection systems and employee training materials is essential to staying one step ahead of malicious actors.

The emergence of AI in phishing attacks challenges us to reevaluate our approaches to cybersecurity. By embracing these recommendations and staying vigilant in the face of evolving threats, we can strengthen our defenses, protect our enterprises and ensure the security of our data and people in today’s dynamic digital age.

For more information on X-Force’s security research, threat intelligence and hacker-led insights, visit the X-Force Research Hub.

To learn more about how IBM can help businesses accelerate their AI journey securely visit here.
