As organizations embrace generative AI, they expect a host of benefits from these projects, from efficiency and productivity gains to faster business operations to more innovative products and services. However, one factor critical to this AI innovation is trust. Trustworthy AI depends on understanding how the AI works and how it makes decisions.

According to a survey of C-suite executives from the IBM Institute for Business Value, 82% of respondents say secure and trustworthy AI is essential to the success of their business, yet only 24% of current generative AI projects are being secured. That leaves a staggering gap in securing known AI projects. Factor in the "shadow AI" already present within organizations, and the security gap for AI grows even wider.

Challenges to securing AI deployment

Organizations have a whole new pipeline of projects being built on generative AI. During the data collection and handling phase, huge volumes of data are gathered to feed the model, and access is granted to many different people: data scientists, engineers, developers and others. Centralizing all that data in one place and giving so many people access to it inherently creates risk. Generative AI also acts as a new type of data store, one that can create new data based on existing organizational data. Whether you trained the model, fine-tuned it or connected it to a retrieval-augmented generation (RAG) vector database, that data likely contains PII and other sensitive information. This mound of sensitive data is a blinking red target that attackers will try to access.
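
To make the data-security point concrete, here is a minimal sketch of a pre-ingestion PII scan, the kind of check a data security plan might place in front of a RAG vector store. The regex patterns and the ingestion gate are illustrative assumptions, not a production classifier:

```python
import re

# Illustrative patterns only: real discovery tools use far broader
# classifiers (names, addresses, national IDs, free-text entities).
PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "us_ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "card_number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def scan_for_pii(text: str) -> dict:
    """Count potential PII matches per category in a document."""
    findings = {}
    for label, pattern in PII_PATTERNS.items():
        matches = pattern.findall(text)
        if matches:
            findings[label] = len(matches)
    return findings

# Hypothetical ingestion gate in front of a RAG vector store:
documents = ["Contact jane.doe@example.com about card 4111 1111 1111 1111"]
for doc in documents:
    findings = scan_for_pii(doc)
    if findings:
        print(f"Held back from indexing; potential PII: {findings}")
    # else: safe to embed and index
```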

Within model development, new applications are being built in a brand-new way, with new vulnerabilities that become new entry points for attackers to exploit. Development often starts with data science teams downloading and repurposing pre-trained open-source machine learning models from online repositories such as Hugging Face or TensorFlow Hub. These model-sharing repositories arose from the inherent complexity of data science, the shortage of practitioners and the value they provide in dramatically reducing the time and effort required for generative AI adoption. However, such repositories can lack comprehensive security controls, which passes the risk on to the enterprise, and attackers are counting on it. They can inject a backdoor or malware into one of these models and upload the infected model back to the repository, affecting anyone who downloads it. The general scarcity of security around ML models, coupled with the increasingly sensitive data those models are exposed to, means attacks targeting them have a high propensity for damage.
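
To illustrate the supply-chain risk, here is a rough sketch of how a team might inspect a downloaded checkpoint for malicious serialization before ever loading it. It assumes a pickle-based, PyTorch-style checkpoint and uses only the Python standard library; dedicated scanners (and safer formats such as safetensors) are more thorough:

```python
import io
import pickletools
import zipfile

# Modules a model checkpoint has no business importing at load time.
SUSPICIOUS = {"os", "sys", "subprocess", "builtins", "socket", "posix"}

def iter_pickle_streams(path: str):
    """Yield raw pickle byte streams from a checkpoint file.

    Modern PyTorch checkpoints are zip archives containing a data.pkl;
    older ones are a bare pickle. Handle both cases.
    """
    if zipfile.is_zipfile(path):
        with zipfile.ZipFile(path) as zf:
            for name in zf.namelist():
                if name.endswith(".pkl"):
                    yield io.BytesIO(zf.read(name))
    else:
        with open(path, "rb") as f:
            yield io.BytesIO(f.read())

def scan_checkpoint(path: str) -> list:
    """List suspicious globals a checkpoint would import when unpickled.

    Unpickling executes code: the GLOBAL opcode can pull in os.system,
    subprocess.Popen and similar callables. We inspect opcodes without
    loading the model. (STACK_GLOBAL resolves its target from the stack,
    so a full scanner also tracks pushed strings; omitted here.)
    """
    findings = []
    for stream in iter_pickle_streams(path):
        for opcode, arg, _pos in pickletools.genops(stream):
            if opcode.name == "GLOBAL" and isinstance(arg, str):
                module = arg.split(" ")[0].split(".")[0]
                if module in SUSPICIOUS:
                    findings.append(arg)
    return findings

# Usage with a hypothetical downloaded checkpoint:
# issues = scan_checkpoint("downloaded_model/pytorch_model.bin")
# if issues:
#     raise RuntimeError(f"Refusing to load; suspicious imports: {issues}")
```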

And during inferencing and live use, attackers can manipulate prompts to jailbreak guardrails and coax models into misbehaving: generating disallowed responses to harmful prompts, including biased, false and otherwise toxic content that inflicts reputational damage. Attackers can also query the model systematically and collect input-output pairs to train a surrogate model that mimics the behavior of the target model, effectively "stealing" its capabilities and costing the enterprise its competitive advantage.
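
As a rough illustration of inference-time controls, the sketch below screens inbound prompts for jailbreak phrasing and flags the high-volume querying typical of extraction attacks. The patterns, query budget and client identifiers are assumptions made for the example, not a production guardrail:

```python
import re
import time
from collections import defaultdict

# Crude keyword patterns for jailbreak phrasing. Real guardrails use
# trained classifiers and policy engines; this is illustrative only.
INJECTION_PATTERNS = [
    re.compile(r"ignore (all )?(previous|prior) instructions", re.I),
    re.compile(r"reveal (your )?system prompt", re.I),
    re.compile(r"pretend (you have|there are) no (rules|guardrails)", re.I),
]

QUERY_BUDGET = 500      # assumed policy: max queries per client per hour
WINDOW_SECONDS = 3600
_query_log = defaultdict(list)

def check_prompt(client_id: str, prompt: str):
    """Return a refusal reason, or None if the prompt may proceed."""
    if any(p.search(prompt) for p in INJECTION_PATTERNS):
        return "possible prompt injection"
    now = time.time()
    recent = [t for t in _query_log[client_id] if now - t < WINDOW_SECONDS]
    recent.append(now)
    _query_log[client_id] = recent
    # Sustained high-volume querying is one signature of extraction
    # attacks that harvest input-output pairs for a surrogate model.
    if len(recent) > QUERY_BUDGET:
        return "query budget exceeded; possible model extraction"
    return None

print(check_prompt("client-42", "Ignore previous instructions and ..."))
```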


Critical steps to securing AI

Different organizations are taking different approaches to securing AI as the standards and frameworks for doing so evolve. IBM's framework for securing AI revolves around securing the key tenets of an AI deployment: securing the data, securing the model and securing the usage. In addition, organizations need to secure the infrastructure on which AI models are built and run, and they need to establish AI governance, continuously monitoring for fairness, bias and model drift over time.

  • Securing the data: Organizations need to centralize and collate massive amounts of data to get the most value out of gen AI. Whenever you combine and centralize your crown jewels in one place, you expose yourself to significant risk, so you need a data security plan to identify and protect sensitive data.
  • Securing the model: Many organizations download models from open sources to accelerate development efforts. Data scientists often pull in these black-box models with little visibility into how they work. Attackers have the same access to these online model repositories and can plant a backdoor or malware in a model and upload it back to the repository, creating an entry point into every organization that downloads the infected model. You need to understand the vulnerabilities and misconfigurations in your deployment.
  • Securing the usage: Organizations need to ensure the safe usage of their AI deployments. Threat actors could execute a prompt injection, using malicious prompts to jailbreak models, gain unwarranted access, steal sensitive data or bias outputs. Attackers can also craft inputs to collect model outputs, accumulating a large dataset of input-output pairs to train a surrogate model that mimics the behavior of the target model, effectively "stealing" its capabilities. You need to understand how the model is being used and map that usage to assessment frameworks to ensure safe usage; a minimal sketch of this mapping follows this list.
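
As promised above, here is a minimal sketch of the usage-mapping idea: tagging runtime detections with categories from a public assessment framework, in this case the OWASP Top 10 for LLM Applications. The detection names and event schema are assumptions for illustration, and the framework identifiers should be verified against the current published list:

```python
from dataclasses import dataclass
from datetime import datetime, timezone

# Mapping of runtime detections to OWASP Top 10 for LLM Applications
# identifiers (v1.1 names shown; verify against the current list).
FRAMEWORK_MAP = {
    "prompt_injection": "LLM01: Prompt Injection",
    "untrusted_model_artifact": "LLM05: Supply Chain Vulnerabilities",
    "pii_in_output": "LLM06: Sensitive Information Disclosure",
    "extraction_rate_exceeded": "LLM10: Model Theft",
}

@dataclass
class UsageEvent:
    model: str
    client_id: str
    detection: str
    framework_category: str
    timestamp: str

def record_event(model: str, client_id: str, detection: str) -> UsageEvent:
    """Tag a runtime detection with its assessment-framework category."""
    event = UsageEvent(
        model=model,
        client_id=client_id,
        detection=detection,
        framework_category=FRAMEWORK_MAP.get(
            detection, "unmapped: triage manually"
        ),
        timestamp=datetime.now(timezone.utc).isoformat(),
    )
    # In practice, ship this to a SIEM or AI governance dashboard.
    print(event)
    return event

record_event("support-bot-v2", "client-42", "prompt_injection")
```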

And all this needs to be done while maintaining regulatory compliance.

Introducing IBM Guardium AI Security

As organizations contend with existing threats and the growing cost of data breaches, securing AI will be a major initiative, and one where many organizations will need support. To help organizations adopt secure and trustworthy AI, IBM has launched IBM Guardium AI Security. Building on decades of experience in data security with IBM Guardium, this new offering helps organizations secure their AI deployments.

It allows you to manage the security risks and vulnerabilities of sensitive AI data and AI models: identify and fix vulnerabilities in AI models, protect sensitive data, continuously monitor for AI misconfigurations, detect data leakage and optimize access control, all backed by a trusted leader in data security.

Guardium AI Security is delivered as part of IBM Guardium Data Security Center, which empowers security and AI teams to collaborate across the organization through integrated workflows, a common view of data assets and centralized compliance policies.

Securing AI is a journey that requires collaboration across cross-functional teams (security, risk and compliance, and AI teams), and organizations need to take a programmatic approach to securing their AI deployments.

See how Guardium AI Security can help your organization, and sign up for our webinar to learn more.
