December 12, 2024 By Doug Bonderud 3 min read

2024 has been a banner year for artificial intelligence (AI). As enterprises ramp up adoption, however, malicious actors have been exploring new ways to compromise systems with intelligent attacks.

With the AI landscape rapidly evolving, it’s worth looking back before moving forward. Here are our top five AI security stories for 2024.

Can you hear me now? Hackers hijack audio with AI

Attackers can fake entire conversations using large language models (LLMs), voice cloning and speech-to-text software. This method is relatively easy to detect, however, so researchers at IBM X-Force carried out an experiment to determine if parts of a conversation can be captured and replaced in real-time.

They discovered that not only was this possible, but relatively easy to achieve. For the experiment, they used the keyword “bank account” — whenever the speaker said bank account, the LLM was instructed to replace the stated bank account number with a fake one.

The limited use of AI made this technique hard to spot, offering a way for attackers to compromise key data without getting caught.

Mad minute: New security tools detect AI attacks in less than 60 seconds

Reducing ransomware risk remains a top priority for enterprise IT teams. Generative AI (gen AI) and LLMs are making this difficult, however, as attackers use generative solutions to craft phishing emails and LLMs to carry out basic scripting tasks.

New security tools, such as cloud-based AI security and IBM’s FlashCore Module, offer AI-enhanced detection that helps security teams detect potential attacks in less than 60 seconds.

Explore AI cybersecurity solutions

Pathways to protection — mapping the impact of AI attacks

The IBM Institute for Business Value found that 84% of CEOs are concerned about widespread or catastrophic attacks tied to gen AI.

To help secure networks, software and other digital assets, it’s critical for companies to understand the potential impact of AI attacks, including:

  • Prompt injection: Attackers create malicious inputs that override system rules to carry out unintended actions.
  • Data poisoning: Adversaries tamper with training data to introduce vulnerabilities or change model behavior.
  • Model extraction: Malicious actors study the inputs and operations of an AI model and then attempt to replicate it, putting enterprise IP at risk.

The IBM Framework for Securing AI can help customers, partners and organizations worldwide better map the evolving threat landscape and identify protective pathways.

ChatGPT 4 quickly cracks one-day vulnerabilities

The bad news? In a study using 15 one-day vulnerabilities, security researchers found that ChatGPT 4 could correctly exploit them 87% of the time. The one-day issues included vulnerable websites, container management software tools and Python packages.

The better news? ChatGPT 4 attacks were far more effective when the LLM had access to the CVE description. Without this data, attack efficacy fell to just 7%. It’s also worth noting that other LLMs and open-source vulnerability scanners were unable to exploit any one-day issues, even with the CVE data.

NIST report: AI prone to prompt injection hacks

A recent NIST report — Adversarial Machine Learning: A Taxonomy and Terminology of Attacks and Mitigations — found that prompt injection poses serious risks for large language models.

There are two types of prompt injection: Direct and indirect. In direct attacks, cyber criminals enter text prompts that lead to unintended or unauthorized actions. One popular prompt injection method is DAN, or Do Anything Now. DAN asks AI to “roleplay” by telling ChatGPT models they are now DAN, and DAN can do anything, including carry out criminal activities. DAN is now on at least version 12.0.

Indirect attacks, meanwhile, focus on providing compromised source data. Attackers create PDFs, web pages or audio files that are ingested by LLMs, in turn altering AI output. Because AI models rely on continuous ingestion and evaluation of data to improve, indirect prompt injection is often considered gen AI’s biggest security flaw since there are no easy ways to find and fix these attacks.

All eyes on AI

As AI moved into the mainstream, 2024 saw a significant uptick in security concerns. With gen AI and LLMs continuing to evolve at a breakneck pace, 2025 promises more of the same, especially as enterprise adoption continues to rise.

The result? Now more than ever, it’s critical for companies to keep their eyes on AI solutions, and keep their ears to the ground for the latest in intelligent security news. 

More from Artificial Intelligence

ISC2 Cybersecurity Workforce Study: Shortage of AI skilled workers

4 min read - AI has made an impact everywhere else across the tech world, so it should surprise no one that the 2024 ISC2 Cybersecurity Workforce Study saw artificial intelligence (AI) jump into the top five list of security skills.It’s not just the need for workers with security-related AI skills. The Workforce Study also takes a deep dive into how the 16,000 respondents think AI will impact cybersecurity and job roles overall, from changing skills approaches to creating generative AI (gen AI) strategies.Budgets…

Preparing for the future of data privacy

4 min read - The focus on data privacy started to quickly shift beyond compliance in recent years and is expected to move even faster in the near future. Not surprisingly, the Thomson Reuters Risk & Compliance Survey Report found that 82% of respondents cited data and cybersecurity concerns as their organization’s greatest risk. However, the majority of organizations noticed a recent shift: that their organization has been moving from compliance as a “check the box” task to a strategic function.With this evolution in…

Cloud Threat Landscape Report: AI-generated attacks low for the cloud

2 min read - For the last couple of years, a lot of attention has been placed on the evolutionary state of artificial intelligence (AI) technology and its impact on cybersecurity. In many industries, the risks associated with AI-generated attacks are still present and concerning, especially with the global average of data breach costs increasing by 10% from last year.However, according to the most recent Cloud Threat Landscape Report released by IBM’s X-Force team, the near-term threat of an AI-generated attack targeting cloud computing…

Topic updates

Get email updates and stay ahead of the latest threats to the security landscape, thought leadership and research.
Subscribe today