2024 has been a banner year for artificial intelligence (AI). As enterprises ramp up adoption, however, malicious actors have been exploring new ways to compromise systems with intelligent attacks.
With the AI landscape rapidly evolving, it’s worth looking back before moving forward. Here are our top five AI security stories for 2024.
Can you hear me now? Hackers hijack audio with AI
Attackers can fake entire conversations using large language models (LLMs), voice cloning and speech-to-text software. This method is relatively easy to detect, however, so researchers at IBM X-Force carried out an experiment to determine if parts of a conversation can be captured and replaced in real-time.
They discovered that not only was this possible, but relatively easy to achieve. For the experiment, they used the keyword “bank account” — whenever the speaker said bank account, the LLM was instructed to replace the stated bank account number with a fake one.
The limited use of AI made this technique hard to spot, offering a way for attackers to compromise key data without getting caught.
Mad minute: New security tools detect AI attacks in less than 60 seconds
Reducing ransomware risk remains a top priority for enterprise IT teams. Generative AI (gen AI) and LLMs are making this difficult, however, as attackers use generative solutions to craft phishing emails and LLMs to carry out basic scripting tasks.
New security tools, such as cloud-based AI security and IBM’s FlashCore Module, offer AI-enhanced detection that helps security teams detect potential attacks in less than 60 seconds.
Explore AI cybersecurity solutions
Pathways to protection — mapping the impact of AI attacks
The IBM Institute for Business Value found that 84% of CEOs are concerned about widespread or catastrophic attacks tied to gen AI.
To help secure networks, software and other digital assets, it’s critical for companies to understand the potential impact of AI attacks, including:
- Prompt injection: Attackers create malicious inputs that override system rules to carry out unintended actions.
- Data poisoning: Adversaries tamper with training data to introduce vulnerabilities or change model behavior.
- Model extraction: Malicious actors study the inputs and operations of an AI model and then attempt to replicate it, putting enterprise IP at risk.
The IBM Framework for Securing AI can help customers, partners and organizations worldwide better map the evolving threat landscape and identify protective pathways.
ChatGPT 4 quickly cracks one-day vulnerabilities
The bad news? In a study using 15 one-day vulnerabilities, security researchers found that ChatGPT 4 could correctly exploit them 87% of the time. The one-day issues included vulnerable websites, container management software tools and Python packages.
The better news? ChatGPT 4 attacks were far more effective when the LLM had access to the CVE description. Without this data, attack efficacy fell to just 7%. It’s also worth noting that other LLMs and open-source vulnerability scanners were unable to exploit any one-day issues, even with the CVE data.
NIST report: AI prone to prompt injection hacks
A recent NIST report — Adversarial Machine Learning: A Taxonomy and Terminology of Attacks and Mitigations — found that prompt injection poses serious risks for large language models.
There are two types of prompt injection: Direct and indirect. In direct attacks, cyber criminals enter text prompts that lead to unintended or unauthorized actions. One popular prompt injection method is DAN, or Do Anything Now. DAN asks AI to “roleplay” by telling ChatGPT models they are now DAN, and DAN can do anything, including carry out criminal activities. DAN is now on at least version 12.0.
Indirect attacks, meanwhile, focus on providing compromised source data. Attackers create PDFs, web pages or audio files that are ingested by LLMs, in turn altering AI output. Because AI models rely on continuous ingestion and evaluation of data to improve, indirect prompt injection is often considered gen AI’s biggest security flaw since there are no easy ways to find and fix these attacks.
All eyes on AI
As AI moved into the mainstream, 2024 saw a significant uptick in security concerns. With gen AI and LLMs continuing to evolve at a breakneck pace, 2025 promises more of the same, especially as enterprise adoption continues to rise.
The result? Now more than ever, it’s critical for companies to keep their eyes on AI solutions, and keep their ears to the ground for the latest in intelligent security news.