Attackers seem to innovate nearly as fast as technology itself develops; day by day, both surge forward. Now, as we enter the AI era, machines not only mimic human behavior but also permeate nearly every facet of our lives. Yet, despite the mounting anxiety about AI’s implications, the full extent of its potential misuse by attackers remains largely unknown.

To better understand how attackers can capitalize on generative AI, we conducted a research project that sheds light on a critical question: Do the current generative AI models have the same deceptive abilities as the human mind?

Imagine a scenario where AI squares off against humans in a battle of phishing. The objective? To determine which contender can get a higher click rate in a phishing simulation against organizations. As someone who writes phishing emails for a living, I was excited to find out the answer.

With only five simple prompts, we were able to trick a generative AI model into developing highly convincing phishing emails in just five minutes, the same time it takes me to brew a cup of coffee. It generally takes my team about 16 hours to build a phishing email, and that’s without factoring in the infrastructure setup, so attackers can potentially save nearly two days of work by using generative AI models. The AI-generated phish was so convincing that it nearly beat the one crafted by experienced social engineers; the fact that it came even that close is an important development in itself.

In this blog, we’ll detail how the AI prompts were created, how the test was conducted and what this means for social engineering attacks today and tomorrow.

Round one: The rise of the machines

In one corner, we had AI-generated phishing emails with highly cunning and convincing narratives.

Creating the prompts. Through a systematic process of experimentation and refinement, a collection of only five prompts was designed to instruct ChatGPT to generate phishing emails tailored to specific industry sectors.

To start, we asked ChatGPT to detail the primary areas of concern for employees within those industries. With the industry and employee concerns established as the primary focus, we prompted ChatGPT to make strategic selections on the use of both social engineering and marketing techniques within the email, choices aimed at maximizing the number of employees who would click a link in the email itself. Next, a prompt asked ChatGPT who the sender should be (e.g., someone internal to the company, a vendor, an outside organization, etc.). Lastly, we asked ChatGPT to combine the following completions to create the phishing email:

  1. Top areas of concern for employees in the healthcare industry: Career Advancement, Job Stability, Fulfilling Work and more
  2. Social engineering techniques that should be used: Trust, Authority, Social Proof
  3. Marketing techniques that should be used: Personalization, Mobile Optimization, Call to Action
  4. Person or company it should impersonate: Internal Human Resources Manager
  5. Email generation: Given all the information listed above, ChatGPT generated the below redacted email, which was later sent by my team to more than 800 employees.
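
To make that workflow concrete, below is a minimal Python sketch of how such a five-step prompt chain could be scripted against a chat-completion API. The client usage follows the public openai package, but the model name and prompt wording are illustrative assumptions, not the exact prompts used in this research.

    # Minimal sketch of a five-step prompt chain against a chat-completion API.
    # Prompt wording and model name are illustrative assumptions only.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    def ask(history, prompt):
        """Send one prompt, keep the running conversation, return the reply."""
        history.append({"role": "user", "content": prompt})
        reply = client.chat.completions.create(
            model="gpt-4", messages=history
        ).choices[0].message.content
        history.append({"role": "assistant", "content": reply})
        return reply

    history = []
    steps = [
        "List the top areas of concern for employees in the healthcare industry.",
        "Choose the social engineering techniques an email should use to maximize clicks.",
        "Choose the marketing techniques the email should use.",
        "Decide who the sender should be (internal staff, a vendor, an outside organization).",
        "Using all of the above, draft the phishing simulation email with a link placeholder.",
    ]
    for step in steps:
        print(ask(history, step))

Because every call shares one running message history, each answer (concerns, techniques, sender) feeds directly into the final email-generation step, mirroring the five prompts above.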

I have nearly a decade of social engineering experience and have crafted hundreds of phishing emails, and even I found the AI-generated phishing emails fairly persuasive. In fact, three organizations originally agreed to participate in this research project, and two backed out completely after reviewing both phishing emails because they expected a high success rate. As the prompts show, the organization that did participate in this study is in the healthcare industry, currently one of the most targeted industries.

Productivity gains for attackers. While a phishing email typically takes my team about 16 hours to craft, the AI phishing email was generated in just five minutes with only five simple prompts.

Round two: The human touch

In the other corner, we had seasoned X-Force Red social engineers.

Armed with creativity and a dash of psychology, these social engineers created phishing emails that resonated with their targets on a personal level. The human element added an air of authenticity that’s often hard to replicate.

Step 1: OSINT – Our approach to phishing invariably begins with Open-Source Intelligence (OSINT) gathering. OSINT is the collection of publicly accessible information, which we then rigorously analyze and use as the foundation of our social engineering campaigns. Noteworthy data sources for our OSINT work include LinkedIn, the organization’s official blog, Glassdoor and a range of other platforms.

During our OSINT activities, we uncovered a blog post detailing the recent launch of an employee wellness program, coinciding with the completion of several prominent projects. Better still, the program had favorable testimonials from employees on Glassdoor attesting to its efficacy and their satisfaction. We also identified the individual responsible for managing the program via LinkedIn.
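
As a hedged illustration of what this collection step can look like when scripted, the toy pass below fetches a few public pages and flags campaign-relevant themes. The URLs and keywords are hypothetical placeholders, not the actual sources consulted for this engagement.

    # Toy OSINT collection pass: fetch public pages and flag campaign-relevant
    # themes. All URLs and keywords are hypothetical placeholders.
    import requests

    SOURCES = [
        "https://example-health.com/blog",      # company blog (hypothetical)
        "https://example-health.com/newsroom",  # press releases (hypothetical)
    ]
    THEMES = ["wellness program", "survey", "benefits", "project launch"]

    for url in SOURCES:
        try:
            page = requests.get(url, timeout=10).text.lower()
        except requests.RequestException as err:
            print(f"{url}: fetch failed ({err})")
            continue
        hits = [theme for theme in THEMES if theme in page]
        print(f"{url}: themes found -> {hits or 'none'}")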

Step 2: Email crafting – Using the data gathered during the OSINT phase, we began carefully constructing our phishing email. As a foundational step, we needed to impersonate someone with the authority to address the topic credibly. To enhance the aura of authenticity and familiarity, we incorporated a legitimate website link to a recently concluded project.

To add persuasive impact, we introduced perceived urgency through “artificial time constraints.” We told recipients that the survey comprised merely “five brief questions,” assured them that completing it would require no more than “a few minutes” of their valuable time and gave a deadline of “this Friday.” This deliberate framing underscored the minimal imposition on their schedules, reinforcing the nonintrusive nature of our request.

Using a survey as a phishing pretext is usually risky, as surveys are often seen as a red flag or simply ignored. However, considering the data we had uncovered, we decided that the potential benefits outweighed the associated risks.
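
Purely to illustrate how those elements (authority, personalization, brevity and a deadline) fit together, here is a sketch that assembles a simulation email from a template. Every name, link and phrase below is a hypothetical placeholder, not the redacted email that follows.

    # Illustrative assembly of a phishing-simulation email from the elements
    # described above. All names, links and wording are hypothetical.
    TEMPLATE = """\
    Subject: Employee Wellness Survey

    Hi {first_name},

    Following the launch of our wellness program ({program_link}), HR is
    collecting feedback in five brief questions. It takes only a few minutes.
    Please complete the survey by this Friday: {tracking_link}

    Thanks,
    {sender_name}
    Human Resources
    """

    def build_email(first_name, program_link, tracking_link, sender_name):
        return TEMPLATE.format(
            first_name=first_name,
            program_link=program_link,
            tracking_link=tracking_link,
            sender_name=sender_name,
        )

    print(build_email("Alex", "https://example.com/wellness",
                      "https://simulation.example.com/s/123", "J. Doe"))

The deadline and the “five brief questions” framing carry the urgency, while the HR sender block supplies the authority.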

The following redacted phishing email was sent to over 800 employees at a global healthcare organization:

The champion: Humans triumph, but barely!

After an intense round of A/B testing, the results were clear: humans emerged victorious, but by the narrowest of margins.

While the human-crafted phishing emails managed to outperform AI, it was a nail-bitingly close contest. Here’s why:

  • Emotional Intelligence: Humans understand emotions in ways that AI can only dream of. We can weave narratives that tug at the heartstrings and sound more realistic, making recipients more likely to click on a malicious link. For example, humans chose a legitimate example within the organization, while AI chose a broad topic, making the human-generated phish more believable.
  • Personalization: In addition to incorporating the recipient’s name into the introduction of the email, we also provided a reference to a legitimate organization, delivering tangible advantages to their workforce.
  • Short and succinct subject line: The human-generated phish had an email subject line that was short and to the point (“Employee Wellness Survey”) while the AI-generated phish had an extremely lengthy subject line (“Unlock your Future: Limited Advancements at Company X”), potentially causing suspicion even before employees opened the email.

Not only did the AI-generated phish lose to humans, but it was also reported as suspicious at a higher rate.
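
For teams running a similar head-to-head test, the outcome ultimately reduces to comparing two click-through proportions. Below is a minimal sketch of a two-proportion z-test; the counts are placeholders, not this study’s actual results.

    # Two-proportion z-test comparing click rates of two phishing variants.
    # The counts below are placeholders, NOT the results of this study.
    from math import erf, sqrt

    def two_proportion_z(clicks_a, n_a, clicks_b, n_b):
        p_a, p_b = clicks_a / n_a, clicks_b / n_b
        p_pool = (clicks_a + clicks_b) / (n_a + n_b)  # pooled rate under H0
        se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
        z = (p_a - p_b) / se
        p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))  # two-tailed
        return z, p_value

    # Hypothetical: human phish vs. AI phish, ~400 recipients per arm.
    z, p = two_proportion_z(clicks_a=56, n_a=400, clicks_b=44, n_b=400)
    print(f"z = {z:.2f}, p = {p:.3f}")

A small z statistic and a large p-value would indicate the margin could be statistical noise, which is exactly the “narrowest of margins” caveat above.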

The takeaway: A glimpse into the future

While X-Force has not witnessed the wide-scale use of generative AI in current campaigns, tools such as WormGPT, which were built as unrestricted or semi-restricted LLMs, have been observed for sale on various forums advertising phishing capabilities, showing that attackers are testing AI’s use in phishing campaigns. If even restricted versions of generative AI models can be tricked into phishing via simple prompts, these unrestricted versions may offer still more efficient ways for attackers to scale sophisticated phishing emails in the future.

Humans may have narrowly won this match, but AI is constantly improving. As technology advances, we can only expect AI to become more sophisticated and potentially even outperform humans one day. As we know, attackers are constantly adapting and innovating. Just this year, we’ve seen scammers increasingly use AI-generated voice clones to trick people into sending money or gift cards or into divulging sensitive information.

While humans may still have the upper hand when it comes to emotional manipulation and crafting persuasive emails, the emergence of AI in phishing signals a pivotal moment in social engineering attacks. Here are five key recommendations for businesses and consumers to stay prepared:

  1. When in doubt, call the sender: If you’re questioning whether an email is legitimate, pick up the phone and verify. Consider choosing a safe word with close friends and family members that you can use in the case of vishing or an AI-generated phone scam.
  2. Abandon the grammar stereotype: Dispel the myth that phishing emails are riddled with bad grammar and spelling errors. AI-driven phishing attempts are increasingly sophisticated and often grammatically correct. That’s why it’s imperative to re-educate employees and emphasize that grammatical errors are no longer the primary red flag. Instead, train them to be vigilant about the length and complexity of email content: longer emails, often a hallmark of AI-generated text, can be a warning sign (see the sketch after this list).
  3. Revamp social engineering programs: This includes bringing techniques like vishing into training programs. Vishing is simple to execute and often highly effective: an X-Force report found that targeted phishing campaigns that added phone calls were 3X more effective than those that didn’t.
  4. Strengthen identity and access management controls: Advanced identity and access management systems can help validate who is accessing what data, whether they have the appropriate entitlements and whether they are who they say they are.
  5. Constantly adapt and innovate: The rapid evolution of AI means that cyber criminals will continue to refine their tactics. We must adopt that same mindset of continuous adaptation and innovation. Regularly updating internal TTPs, threat detection systems and employee training materials is essential to staying one step ahead of malicious actors.
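
To make recommendation #2 actionable, here is the toy screening heuristic referenced in that list: flag unusually long message bodies for closer review. The word-count threshold is an illustrative assumption, not an X-Force detection rule.

    # Toy heuristic from recommendation #2: flag unusually long email bodies
    # for closer review. The 120-word threshold is an illustrative assumption.
    def flag_for_review(body: str, max_words: int = 120) -> bool:
        """Return True if the body is long enough to warrant extra scrutiny."""
        return len(body.split()) > max_words

    short = "Reminder: the wellness survey closes Friday. Link inside."
    print(flag_for_review(short))       # False: short note
    print(flag_for_review(short * 40))  # True: unusually long body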

The emergence of AI in phishing attacks challenges us to reevaluate our approaches to cybersecurity. By embracing these recommendations and staying vigilant in the face of evolving threats, we can strengthen our defenses, protect our enterprises and ensure the security of our data and people in today’s dynamic digital age.

For more information on X-Force’s security research, threat intelligence and hacker-led insights, visit the X-Force Research Hub.

To learn more about how IBM can help businesses accelerate their AI journey securely, visit here.
