October 31, 2023 By Sue Poremba 3 min read

As the one-year anniversary of ChatGPT approaches, cybersecurity analysts are still exploring their options. One primary goal is to understand how generative AI can help solve security problems while also looking out for ways threat actors can use the technology. There is some thought that AI, specifically large language models (LLMs), will be the equalizer that cybersecurity teams have been looking for: the learning curve is similar for analysts and threat actors, and because generative AI relies on the data sets created by users, there is more control over what threat actors can access.

What gives threat actors an advantage is the expanded attack landscape created by LLMs. The freewheeling use of generative AI tools has opened the door for accidental data leaks. And, of course, threat actors see tools like ChatGPT as a way to create more realistic and targeted social engineered attacks.

LLMs are designed to provide users with an accurate response based on the data in its system based on the prompt offered. They are also designed with safeguards in place to prevent them from going rogue or being manipulated for evil purposes. However, these guardrails aren’t foolproof. IBM researchers, for example, were able to “hypnotize” LLMs that offered a pathway for AI to provide wrong answers or leak confidential information.

There’s another way that threat actors can manipulate ChatGPT and other generative AI tools: prompt injections. By combining prompt engineering and classic social engineering tactics, threat actors are able to disable the safeguards on generative AI and can do anything from creating malicious code to extracting sensitive data.

How prompt injections work

When voice-activated AI tools like Alexa and Siri first hit the scene, users would prompt them with ridiculous questions to push the limits on the responses. Unless you were asking Siri the best places to bury a dead body, this was harmless fun. But it also was the precursor to prompt engineering when generative AI became universally available.

A normal prompt is the request that guides AI’s response. But when the request includes manipulative language, it skews the response. Looking at it in cybersecurity terms, prompt injection is similar to SQL injections — there is a directive that looks normal but is meant to manipulate the system.

“Prompt injection is a type of security vulnerability that can be exploited to control the behavior of a ChatGPT instance,” Github explained.

A prompt injection can be as simple as telling the LLM to ignore the pre-programmed instructions. It could ask specifically for a nefarious action or to circumvent filters to create incorrect responses.

Related: The hidden risks of LLMs

The risk of sensitive data

Generative AI depends on the data sets created by users. However, high-level information may not produce the type of responses that users need, so they begin to add more sensitive information, like proprietary strategies, product details, customer information or other sensitive data. Given the nature of generative AI, this could be putting that information at risk: If another user were to give a maliciously engineered prompt, they could potentially gain access to that information.

The prompt injection can be manipulated to gain access to that sensitive information, essentially using social engineering tactics through the prompt to get the content that could best benefit threat actors. Could threat actors use LLMs to get access to login credentials or financial data? Yes, if that information is readily available in the data set. Prompt injections can also lead users to malicious websites or exploit vulnerabilities.

Protect your data

There is a surprisingly high level of trust in LLM models. Users expect the generated information to be correct. It’s time to stop trusting ChatGPT and put best security practices into action. They include:

  • Avoid sharing sensitive or proprietary information in LLM. If it is necessary for that information to be available to run your tasks, do so in a manner that masks any identifiers. Make the information as anonymous and generic as possible.
  • Verify then trust. If you are instructed to answer an email or check a website, do your due diligence to ensure the path is legitimate.
  • If something doesn’t seem right, contact the IT and security teams.

By following these steps, you can help keep your data protected as we continue to discover what LLMs will mean for the future of cybersecurity.

More from Artificial Intelligence

Autonomous security for cloud in AWS: Harnessing the power of AI for a secure future

3 min read - As the digital world evolves, businesses increasingly rely on cloud solutions to store data, run operations and manage applications. However, with this growth comes the challenge of ensuring that cloud environments remain secure and compliant with ever-changing regulations. This is where the idea of autonomous security for cloud (ASC) comes into play.Security and compliance aren't just technical buzzwords; they are crucial for businesses of all sizes. With data breaches and cyber threats on the rise, having systems that ensure your…

Cybersecurity Awareness Month: 5 new AI skills cyber pros need

4 min read - The rapid integration of artificial intelligence (AI) across industries, including cybersecurity, has sparked a sense of urgency among professionals. As organizations increasingly adopt AI tools to bolster security defenses, cyber professionals now face a pivotal question: What new skills do I need to stay relevant?October is Cybersecurity Awareness Month, which makes it the perfect time to address this pressing issue. With AI transforming threat detection, prevention and response, what better moment to explore the essential skills professionals might require?Whether you're…

3 proven use cases for AI in preventative cybersecurity

3 min read - IBM’s Cost of a Data Breach Report 2024 highlights a ground-breaking finding: The application of AI-powered automation in prevention has saved organizations an average of $2.2 million.Enterprises have been using AI for years in detection, investigation and response. However, as attack surfaces expand, security leaders must adopt a more proactive stance.Here are three ways how AI is helping to make that possible:1. Attack surface management: Proactive defense with AIIncreased complexity and interconnectedness are a growing headache for security teams, and…

Topic updates

Get email updates and stay ahead of the latest threats to the security landscape, thought leadership and research.
Subscribe today