Attackers innovate nearly as fast as technology develops, and both surge forward day by day. Now, as we enter the AI era, machines not only mimic human behavior but also permeate nearly every facet of our lives. Yet, despite the mounting anxiety about AI’s implications, the full extent of its potential misuse by attackers remains largely unknown.
To better understand how attackers can capitalize on generative AI, we conducted a research project that sheds light on a critical question: Do current generative AI models have the same deceptive abilities as the human mind?
Imagine a scenario where AI squares off against humans in a battle of phishing. The objective? To determine which contender can get a higher click rate in a phishing simulation against organizations. As someone who writes phishing emails for a living, I was excited to find out the answer.
With only five simple prompts, we were able to trick a generative AI model into developing highly convincing phishing emails in just five minutes, the same time it takes me to brew a cup of coffee. It generally takes my team about 16 hours to build a phishing email, and that’s without factoring in the infrastructure setup. So attackers can potentially save nearly two days of work by using generative AI models. And the AI-generated phish was so convincing that it nearly beat the one crafted by experienced social engineers; the fact that it came that close is itself an important development.
In this blog, we’ll detail how the AI prompts were created, how the test was conducted and what this means for social engineering attacks today and tomorrow.
In one corner, we had AI-generated phishing emails with highly cunning and convincing narratives.
Creating the prompts. Through a systematic process of experimentation and refinement, a collection of only five prompts was designed to instruct ChatGPT to generate phishing emails tailored to specific industry sectors.
To start, we asked ChatGPT to detail the primary areas of concern for employees within those industries. After prioritizing the industry and employee concerns as the primary focus, we prompted ChatGPT to make strategic selections on the use of both social engineering and marketing techniques within the email. These choices aimed to optimize the likelihood of a greater number of employees clicking on a link in the email itself. Next, a prompt asked ChatGPT who the sender should be (e.g., someone internal to the company, a vendor, an outside organization, etc.). Lastly, we asked ChatGPT to add the following completions to create the phishing email:
I have nearly a decade of social engineering experience and have crafted hundreds of phishing emails, and even I found the AI-generated phishing emails to be fairly persuasive. In fact, three organizations originally agreed to participate in this research project, and two backed out completely after reviewing both phishing emails because they expected a high success rate. As the prompts showed, the organization that participated in this research study was in the healthcare industry, which is currently one of the most targeted industries.
Productivity gains for attackers. While a phishing email typically takes my team about 16 hours to craft, the AI phishing email was generated in just five minutes with only five simple prompts.
In the other corner, we had seasoned X-Force Red social engineers.
Armed with creativity and a dash of psychology, these social engineers created phishing emails that resonated with their targets on a personal level. The human element added an air of authenticity that’s often hard to replicate.
Step 1: OSINT – Our approach to phishing invariably begins with Open-Source Intelligence (OSINT) gathering. OSINT is the retrieval of publicly accessible information, which is then rigorously analyzed and serves as a foundational resource in the formulation of social engineering campaigns. Noteworthy sources of data for our OSINT work include platforms such as LinkedIn, the organization’s official blog, Glassdoor and a range of other sources.
During our OSINT activities, we successfully uncovered a blog post detailing the recent launch of an employee wellness program, coinciding with the completion of several prominent projects. Encouragingly, this program had favorable testimonials from employees on Glassdoor, attesting to its efficacy and employee satisfaction. Furthermore, we identified an individual responsible for managing the program via LinkedIn.
Step 2: Email crafting – Utilizing the data gathered through our OSINT phase, we began meticulously constructing our phishing email. As a foundational step, it was imperative that we impersonate someone with the authority to address the topic effectively. To enhance the aura of authenticity and familiarity, we incorporated a legitimate website link to a recently concluded project.
To add persuasive impact, we strategically integrated elements of perceived urgency by introducing “artificial time constraints.” We conveyed to the recipients that the survey in question comprised merely “five brief questions,” assured them that its completion would require no more than “a few minutes” of their valuable time and gave a deadline of “this Friday.” This deliberate framing served to underscore the minimal imposition on their schedules, reinforcing the nonintrusive nature of our approach.
Using a survey as a phishing pretext is usually risky, as it’s often seen as a red flag or simply ignored. However, considering the data we uncovered we decided that the potential benefits could outweigh the associated risks.
The following redacted phishing email was sent to over 800 employees at a global healthcare organization:
After an intense round of A/B testing, the results were clear: humans emerged victorious, but by the narrowest of margins.
While the human-crafted phishing emails managed to outperform AI, it was a nail-bitingly close contest. Here’s why:
Not only did the AI-generated phish lose to humans, but it was also reported as suspicious at a higher rate.
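To make that comparison concrete, here is a minimal sketch of how a security team might tally the two metrics discussed above, click rate and report rate, for each campaign in a phishing simulation. The campaign names and all counts below are placeholders for illustration only, not the actual results of this study.

```python
from dataclasses import dataclass


@dataclass
class CampaignResult:
    """Outcome counts for one phishing-simulation campaign (hypothetical numbers)."""
    name: str
    emails_sent: int
    clicks: int   # recipients who clicked the simulated phishing link
    reports: int  # recipients who reported the email as suspicious

    @property
    def click_rate(self) -> float:
        return self.clicks / self.emails_sent

    @property
    def report_rate(self) -> float:
        return self.reports / self.emails_sent


# Placeholder values for illustration only -- not the study's actual data.
campaigns = [
    CampaignResult("AI-generated phish", emails_sent=800, clicks=90, reports=470),
    CampaignResult("Human-crafted phish", emails_sent=800, clicks=110, reports=420),
]

for c in campaigns:
    print(f"{c.name}: click rate {c.click_rate:.1%}, report rate {c.report_rate:.1%}")
```

In practice, these rates are what an A/B-style phishing simulation compares: a lower click rate and a higher report rate both indicate a less effective lure.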
While X-Force has not witnessed the wide-scale use of generative AI in current campaigns, tools such as WormGPT, which were built to be unrestricted or semi-restricted LLMs, have been observed for sale on various forums advertising phishing capabilities, showing that attackers are testing AI’s use in phishing campaigns. While even restricted versions of generative AI models can be tricked into phishing via simple prompts, these unrestricted versions may offer more efficient ways for attackers to scale sophisticated phishing emails in the future.
Humans may have narrowly won this match, but AI is constantly improving. As technology advances, we can only expect AI to become more sophisticated and potentially even outperform humans one day. As we know, attackers are constantly adapting and innovating. Just this year, we’ve seen scammers increasingly use AI-generated voice clones to trick people into sending money or gift cards, or into divulging sensitive information.
While humans may still have the upper hand when it comes to emotional manipulation and crafting persuasive emails, the emergence of AI in phishing signals a pivotal moment in social engineering attacks. Here are five key recommendations for businesses and consumers to stay prepared:
The emergence of AI in phishing attacks challenges us to reevaluate our approaches to cybersecurity. By embracing these recommendations and staying vigilant in the face of evolving threats, we can strengthen our defenses, protect our enterprises and ensure the security of our data and people in today’s dynamic digital age.
For more information on X-Force’s security research, threat intelligence and hacker-led insights, visit the X-Force Research Hub.
To learn more about how IBM can help businesses accelerate their AI journey securely, visit here.