December 12, 2024 By Doug Bonderud 3 min read

2024 has been a banner year for artificial intelligence (AI). As enterprises ramp up adoption, however, malicious actors have been exploring new ways to compromise systems with intelligent attacks.

With the AI landscape rapidly evolving, it’s worth looking back before moving forward. Here are our top five AI security stories for 2024.

Can you hear me now? Hackers hijack audio with AI

Attackers can fake entire conversations using large language models (LLMs), voice cloning and speech-to-text software. This method is relatively easy to detect, however, so researchers at IBM X-Force carried out an experiment to determine if parts of a conversation can be captured and replaced in real-time.

They discovered that not only was this possible, but relatively easy to achieve. For the experiment, they used the keyword “bank account” — whenever the speaker said bank account, the LLM was instructed to replace the stated bank account number with a fake one.

The limited use of AI made this technique hard to spot, offering a way for attackers to compromise key data without getting caught.

Mad minute: New security tools detect AI attacks in less than 60 seconds

Reducing ransomware risk remains a top priority for enterprise IT teams. Generative AI (gen AI) and LLMs are making this difficult, however, as attackers use generative solutions to craft phishing emails and LLMs to carry out basic scripting tasks.

New security tools, such as cloud-based AI security and IBM’s FlashCore Module, offer AI-enhanced detection that helps security teams detect potential attacks in less than 60 seconds.

Explore AI cybersecurity solutions

Pathways to protection — mapping the impact of AI attacks

The IBM Institute for Business Value found that 84% of CEOs are concerned about widespread or catastrophic attacks tied to gen AI.

To help secure networks, software and other digital assets, it’s critical for companies to understand the potential impact of AI attacks, including:

  • Prompt injection: Attackers create malicious inputs that override system rules to carry out unintended actions.
  • Data poisoning: Adversaries tamper with training data to introduce vulnerabilities or change model behavior.
  • Model extraction: Malicious actors study the inputs and operations of an AI model and then attempt to replicate it, putting enterprise IP at risk.

The IBM Framework for Securing AI can help customers, partners and organizations worldwide better map the evolving threat landscape and identify protective pathways.

ChatGPT 4 quickly cracks one-day vulnerabilities

The bad news? In a study using 15 one-day vulnerabilities, security researchers found that ChatGPT 4 could correctly exploit them 87% of the time. The one-day issues included vulnerable websites, container management software tools and Python packages.

The better news? ChatGPT 4 attacks were far more effective when the LLM had access to the CVE description. Without this data, attack efficacy fell to just 7%. It’s also worth noting that other LLMs and open-source vulnerability scanners were unable to exploit any one-day issues, even with the CVE data.

NIST report: AI prone to prompt injection hacks

A recent NIST report — Adversarial Machine Learning: A Taxonomy and Terminology of Attacks and Mitigations — found that prompt injection poses serious risks for large language models.

There are two types of prompt injection: Direct and indirect. In direct attacks, cyber criminals enter text prompts that lead to unintended or unauthorized actions. One popular prompt injection method is DAN, or Do Anything Now. DAN asks AI to “roleplay” by telling ChatGPT models they are now DAN, and DAN can do anything, including carry out criminal activities. DAN is now on at least version 12.0.

Indirect attacks, meanwhile, focus on providing compromised source data. Attackers create PDFs, web pages or audio files that are ingested by LLMs, in turn altering AI output. Because AI models rely on continuous ingestion and evaluation of data to improve, indirect prompt injection is often considered gen AI’s biggest security flaw since there are no easy ways to find and fix these attacks.

All eyes on AI

As AI moved into the mainstream, 2024 saw a significant uptick in security concerns. With gen AI and LLMs continuing to evolve at a breakneck pace, 2025 promises more of the same, especially as enterprise adoption continues to rise.

The result? Now more than ever, it’s critical for companies to keep their eyes on AI solutions, and keep their ears to the ground for the latest in intelligent security news. 

More from Artificial Intelligence

Autonomous security for cloud in AWS: Harnessing the power of AI for a secure future

3 min read - As the digital world evolves, businesses increasingly rely on cloud solutions to store data, run operations and manage applications. However, with this growth comes the challenge of ensuring that cloud environments remain secure and compliant with ever-changing regulations. This is where the idea of autonomous security for cloud (ASC) comes into play.Security and compliance aren't just technical buzzwords; they are crucial for businesses of all sizes. With data breaches and cyber threats on the rise, having systems that ensure your…

Cybersecurity Awareness Month: 5 new AI skills cyber pros need

4 min read - The rapid integration of artificial intelligence (AI) across industries, including cybersecurity, has sparked a sense of urgency among professionals. As organizations increasingly adopt AI tools to bolster security defenses, cyber professionals now face a pivotal question: What new skills do I need to stay relevant?October is Cybersecurity Awareness Month, which makes it the perfect time to address this pressing issue. With AI transforming threat detection, prevention and response, what better moment to explore the essential skills professionals might require?Whether you're…

3 proven use cases for AI in preventative cybersecurity

3 min read - IBM’s Cost of a Data Breach Report 2024 highlights a ground-breaking finding: The application of AI-powered automation in prevention has saved organizations an average of $2.2 million.Enterprises have been using AI for years in detection, investigation and response. However, as attack surfaces expand, security leaders must adopt a more proactive stance.Here are three ways how AI is helping to make that possible:1. Attack surface management: Proactive defense with AIIncreased complexity and interconnectedness are a growing headache for security teams, and…

Topic updates

Get email updates and stay ahead of the latest threats to the security landscape, thought leadership and research.
Subscribe today