June 14, 2024 By Jonathan Reed 4 min read

How many companies intentionally refuse to use AI to get their work done faster and more efficiently? Probably none: the advantages of AI are too great to deny.

For organizations, those advantages are especially clear in optimizing critical operations and outputs. However, generative AI also introduces risk. According to the IBM Institute for Business Value, 96% of executives say adopting generative AI makes a security breach likely in their organization within the next three years.

CISA Director Jen Easterly said, “We don’t have a cyber problem, we have a technology and culture problem. Because at the end of the day, we have allowed speed to market and features to really put safety and security in the backseat.” And nowhere in technology is that rush to market more apparent than in generative AI.

AI training sets ingest massive amounts of valuable and sensitive data, which makes AI models a juicy attack target. Organizations cannot afford to bring unsecured AI into their environments, but they can’t do without the technology either.

To bridge the gap between the need for AI and its inherent risks, it’s imperative to establish a solid framework to direct AI security and model use. To help meet this need, IBM recently announced its Framework for Securing Generative AI. Let’s see how a well-developed framework can help you establish solid AI cybersecurity.

Securing the AI pipeline

A generative AI framework should be designed to help customers, partners and organizations understand the likeliest attacks on AI. From there, defensive approaches can be prioritized to quickly secure generative AI initiatives.

Securing the AI pipeline involves five areas of action:

  1. Securing the data: How data is collected and handled
  2. Securing the model: AI model development and training
  3. Securing the usage: AI model inference and live use
  4. Securing AI model infrastructure
  5. Establishing sound AI governance

Now, let’s see how each area is oriented to address AI security threats.


1. Secure the AI data

Hungry AI models consume massive amounts of data, which data scientists, engineers and developers will access for development purposes. However, developers might not have security high on their list of priorities. If mishandled, your sensitive data and critical intellectual property (IP) could end up exposed.

In AI model attacks, exfiltration of the underlying data sets is likely to be one of the most common scenarios. Security fundamentals are therefore the first line of defense for these data sets: discovering and classifying sensitive data, encrypting it at rest and in transit, and enforcing strict access management and activity monitoring around the data stores that feed AI training.
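As one illustrative fundamental, sensitive records can be flagged before they ever reach a training set. The sketch below is a naive pre-ingestion scan; the patterns and function names are hypothetical, and a real deployment would rely on a DLP or data classification service rather than hand-rolled regexes:

```python
import re

# Hypothetical patterns for illustration only; production systems would use
# a dedicated data discovery/classification tool.
PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def scan_record(text: str) -> list[str]:
    """Return the PII categories found in a single training record."""
    return [name for name, pattern in PII_PATTERNS.items() if pattern.search(text)]

def filter_dataset(records: list[str]) -> tuple[list[str], list[tuple[str, list[str]]]]:
    """Split records into clean ones and flagged ones held back for review."""
    clean, flagged = [], []
    for record in records:
        hits = scan_record(record)
        if hits:
            flagged.append((record, hits))
        else:
            clean.append(record)
    return clean, flagged
```

Flagged records can then be masked, tokenized or excluded before model training begins.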

2. Secure the AI model

When developing AI applications, data scientists frequently use pre-existing, freely available machine learning (ML) models sourced from online repositories. However, as with any open-source component, security is frequently not built in.

Every organization must weigh the benefits of accelerated model development against the AI security risks, and without proper model security the downside can be significant. Remember, attackers have access to the same online repositories, and backdoors or malware can be injected into open-source models. Any organization that downloads an infected model is wide open to attack.

Furthermore, API-enabled large language models (LLMs) present a similar risk. Hackers can target API interfaces to access and exploit data being transported across the APIs. And LLM agents or plug-ins with excessive permissions further increase the risk for compromise.

To secure AI models, organizations should vet and scan third-party models for backdoors and malware before use, harden the APIs that expose LLMs, and grant LLM agents and plug-ins only the minimum permissions they need.
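One concrete vetting step is refusing to load model weights whose checksum is not on an approved allowlist. This is a minimal sketch, assuming the organization maintains vetted digests somewhere trustworthy; in practice the allowlist would come from a signed model registry, not a hard-coded dict:

```python
import hashlib
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Stream the file in chunks so large model weights need not fit in memory."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_model(path: Path, trusted_digests: dict[str, str]) -> bool:
    """Refuse to load any model file whose digest is not on the vetted allowlist."""
    expected = trusted_digests.get(path.name)
    return expected is not None and sha256_of(path) == expected
```

A loader would call `verify_model` before deserializing anything, so a tampered or unknown artifact never reaches the runtime.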

3. Secure the AI usage

When AI models first became widely available, waves of users rushed to test the platforms. It wasn’t long before hackers were able to trick the models into ignoring guardrails and generating biased, false or even dangerous responses. All this can lead to reputational damage and increase the risk of costly legal headaches.

Attackers can also attempt to analyze input/output pairs and train a surrogate model to mimic the behavior of your organization’s AI model. This means the enterprise can lose its competitive edge. Finally, AI models are also vulnerable to denial of service attacks, where attackers overwhelm the LLM with inputs that degrade the quality of service and ramp up resource use.
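Both model extraction and denial of service depend on sending the model a high volume of queries, so per-client rate limiting blunts both. Below is a minimal token-bucket sketch (the class and parameters are illustrative, not any particular product's API); production gateways would enforce this at the API layer:

```python
import time

class TokenBucket:
    """Per-client token bucket: throttles the high-volume querying behind
    model-extraction and denial-of-service attempts."""

    def __init__(self, capacity: int, refill_per_sec: float):
        self.capacity = capacity            # burst size allowed
        self.refill_per_sec = refill_per_sec  # sustained request rate
        self.tokens = float(capacity)
        self.last = time.monotonic()

    def allow(self) -> bool:
        """Spend one token per request; refuse when the bucket is empty."""
        now = time.monotonic()
        elapsed = now - self.last
        self.tokens = min(self.capacity, self.tokens + elapsed * self.refill_per_sec)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False
```

A gateway would keep one bucket per API key, rejecting or delaying requests once `allow()` returns `False`.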

Best practices for AI model usage security include:

  • Monitoring for prompt injections
  • Monitoring for outputs containing sensitive data or inappropriate content
  • Detecting and responding to data poisoning, model evasion and model extraction
  • Deploying machine learning detection and response (MLDR), which can be integrated into security operations solutions such as IBM Security® QRadar®, enabling security teams to deny access and quarantine or disconnect compromised models.
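The first two monitoring practices above can be approximated with simple pattern checks on prompts and outputs. This is only a sketch with hypothetical patterns; real MLDR tooling uses far richer detection than keyword matching:

```python
import re

# Hypothetical markers for illustration; real detectors are model-based.
INJECTION_MARKERS = [
    r"ignore (all )?previous instructions",
    r"disregard your guardrails",
    r"you are now in developer mode",
]
SENSITIVE_OUTPUT = [
    r"\b\d{3}-\d{2}-\d{4}\b",       # US SSN-shaped strings
    r"(?i)api[_-]?key\s*[:=]",      # credential-looking output
]

def flag_prompt(prompt: str) -> bool:
    """Flag inputs that resemble known prompt-injection phrasing."""
    return any(re.search(p, prompt, re.IGNORECASE) for p in INJECTION_MARKERS)

def flag_output(output: str) -> bool:
    """Flag responses that appear to leak sensitive data."""
    return any(re.search(p, output) for p in SENSITIVE_OUTPUT)
```

Flagged prompts can be blocked before inference, and flagged outputs redacted or routed to human review.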

4. Secure the infrastructure

A secure infrastructure must underpin any solid AI cybersecurity strategy. Strengthening network security, refining access control, implementing robust data encryption and deploying vigilant intrusion detection and prevention systems around AI environments are all critical for securing the infrastructure that supports AI. Additionally, investing in security solutions tailored to safeguarding AI assets should be a priority.
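Refining access control can start with gating each AI-infrastructure operation behind an explicit permission. The roles, permissions and decorator below are hypothetical placeholders; a real deployment would delegate these checks to the organization's IAM system:

```python
from functools import wraps

# Hypothetical role model for illustration; production systems would back
# this with the organization's IAM, not an in-memory dict.
ROLE_PERMISSIONS = {
    "ml-engineer": {"read_dataset", "train_model"},
    "analyst": {"run_inference"},
}

class AccessDenied(Exception):
    pass

def requires(permission: str):
    """Decorator that gates an AI-infrastructure operation behind a permission."""
    def decorator(func):
        @wraps(func)
        def wrapper(role: str, *args, **kwargs):
            if permission not in ROLE_PERMISSIONS.get(role, set()):
                raise AccessDenied(f"{role!r} lacks {permission!r}")
            return func(role, *args, **kwargs)
        return wrapper
    return decorator

@requires("train_model")
def launch_training(role: str, dataset: str) -> str:
    return f"training started on {dataset}"
```

The same pattern extends to dataset reads, model deployment and inference endpoints, keeping each operation behind least-privilege checks.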

5. Establish AI governance

Artificial intelligence governance comprises the guardrails that keep AI tools and systems safe and ethical over time. It establishes the frameworks, rules and standards that direct AI research, development and application to ensure safety, fairness and respect for human rights.

IBM is an industry leader in AI governance, as shown by its presentation of the IBM Framework for Securing Generative AI. As entities continue to give AI more business process and decision-making responsibility, AI model behavior must be kept in check, monitoring for fairness, bias and drift over time. Whether induced or not, a model that diverges from what it was originally designed to do can introduce significant risk.
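Monitoring for drift over time can begin with a crude statistical check comparing a model's recent output scores against a baseline. The functions and the review threshold below are illustrative assumptions; governance programs would use proper drift metrics and fairness audits on top of this kind of signal:

```python
import statistics

def drift_score(baseline: list[float], recent: list[float]) -> float:
    """Crude drift signal: shift in mean output score, in baseline std-dev units."""
    base_mean = statistics.mean(baseline)
    base_std = statistics.stdev(baseline) or 1.0  # avoid division by zero
    return abs(statistics.mean(recent) - base_mean) / base_std

def check_drift(baseline: list[float], recent: list[float],
                threshold: float = 3.0) -> bool:
    """Return True when recent model behavior warrants human review."""
    return drift_score(baseline, recent) > threshold
```

When `check_drift` fires, the governance process (not the code) decides whether to retrain, roll back or escalate the model for review.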
