The rapid rise of generative artificial intelligence (gen AI) technologies has ushered in a transformative era for industries worldwide. Over the past 18 months, enterprises have increasingly integrated gen AI into their operations, leveraging its potential to innovate and streamline processes. From automating customer service to enhancing product development, the applications of gen AI are vast and impactful. According to a recent IBM report, approximately 42% of large enterprises have adopted AI, with the technology capable of automating up to 30% of knowledge work activities in various sectors, including sales, marketing, finance and customer service.

However, the accelerated adoption of gen AI also brings significant risks, such as inaccuracy, intellectual property concerns and cybersecurity threats. Of course, this is only one instance in a series of enterprises adopting new technology, such as cloud computing, only to realize afterward that incorporating security principles should have been a priority from the start. Now, we can learn from those past missteps and adopt Secure by Design principles early while developing gen AI-based enterprise applications.

Lessons from the cloud transformation rush

The recent wave of cloud adoption provides valuable insights into prioritizing security early in any technology transition. Many organizations embraced cloud technologies for benefits like cost reduction, scalability and disaster recovery. However, the haste to reap these benefits often led to oversights in security, resulting in high-profile breaches due to misconfigurations. The following chart shows the impact of these misconfigurations. It illustrates the cost and frequency of data breaches by initial attack vector, where cloud misconfigurations are shown to have a significant average cost of $3.98 million:

Figure 1: Measured in USD millions; percentage of all breaches (IBM Cost of a Data Breach report 2024)

One notable incident occurred in 2023: A misconfigured cloud storage bucket exposed sensitive data from multiple companies, including personal information like email addresses and social security numbers. This breach highlighted the risks associated with improper cloud storage configurations and the financial impact due to reputational damage.

Similarly, a vulnerability in an enterprise workspace Software-as-a-Service (SaaS) application resulted in a major data breach in 2023, where unauthorized access was gained through an unsecured account. This brought to light the impact of inadequate account management and monitoring. These incidents, among many others (captured in the recently published IBM Cost of a Data Breach Report 2024), underline the critical need for a Secure by Design approach, ensuring that security measures are integral to these AI adoption programs from the very beginning.

Need for early security measures in AI transformational programs

As enterprises rapidly integrate gen AI into their operations, the importance of addressing security from the beginning cannot be overstated. AI technologies, while transformative, introduce new security vulnerabilities. Recent breaches related to AI platforms demonstrate these risks and their potential impact on businesses.

Here are some examples of AI-related security breaches in the last couple of months:

1. Deepfake scams: In one case, a UK energy firm’s CEO was duped into transferring $243,000, believing he was speaking with his boss. The scam utilized deepfake technology, highlighting the potential for AI-driven fraud.

2. Data poisoning attacks: Attackers can corrupt AI models by introducing malicious data during training, leading to erroneous outputs. This was seen when a cybersecurity firm’s machine learning model was compromised, causing delays in threat response.

3. AI model exploits: Vulnerabilities in AI applications, such as chatbots, have led to many incidents of unauthorized access to sensitive data. These breaches underscore the need for robust security measures around AI interfaces.

Business implications of AI security breaches

The consequences of AI security breaches are multifaceted:

  • Financial losses: Breaches can result in direct financial losses and significant costs related to mitigation efforts
  • Operational disruption: Data poisoning and other attacks can disrupt operations, leading to incorrect decisions and delays in addressing threats
  • Reputational damage: Breaches can damage a company’s reputation, eroding customer trust and market share

As enterprises rapidly adopt their customer-facing applications to adopt gen AI technologies, it is important to have a structured approach to securing them to reduce the risk of having their businesses interrupted by cyber adversaries.

A three-pronged approach to securing gen AI applications

To effectively secure gen AI applications, enterprises should adopt a comprehensive security strategy that spans the entire AI lifecycle. There are three key stages:

1. Data collection and handling: Ensure the secure collection and handling of data, including encryption and strict access controls.

2. Model development and training: Implement secure practices during development, training and fine-tuning of AI models to protect against data poisoning and other attacks.

3. Model inference and live use: Monitor AI systems in real-time and ensure continuous security assessments to detect and mitigate potential threats.

These three stages should be considered alongside the Shared Responsibility model of a typical cloud-based AI platform (shown below).

Figure 2: Secure gen AI usage – Shared Responsibility matrix

In the IBM Framework for Securing Generative AI, you can find a detailed description of these three stages and security principles to follow. They are combined with cloud security controls at the underlying infrastructure layer, which runs large language models and applications.

Figure 3: IBM Framework for securing generative AI

Balancing progress with security

The transition to gen AI enables enterprises to fuel innovation in their business applications, automate complex tasks and improve efficiency, accuracy and decision-making while reducing costs and increasing the speed and agility of their business processes.

As seen with the cloud adoption wave, prioritizing security from the beginning is crucial. By incorporating security measures into the AI adoption process early on, enterprises can convert past missteps into critical milestones and protect themselves from sophisticated cyber threats. This proactive approach ensures compliance with rapidly evolving AI regulatory requirements, protects enterprises and their client’s sensitive data and maintains the trust of stakeholders. This way, businesses can achieve their AI strategic goals securely and sustainably.

How IBM can help

IBM offers comprehensive solutions to support enterprises in securely adopting AI technologies. Through consulting, security services and a robust AI security framework, IBM is helping organizations build and deploy AI applications at scale, ensuring transparency, ethics and compliance. IBM’s AI Security Discovery workshops are a critical first step, helping clients identify and mitigate security risks early in their AI adoption journey.

For more information, please check out these resources:

More from Artificial Intelligence

How I got started: AI security executive

3 min read - Artificial intelligence and machine learning are becoming increasingly crucial to cybersecurity systems. Organizations need professionals with a strong background that mixes AI/ML knowledge with cybersecurity skills, bringing on board people like Nicole Carignan, Vice President of Strategic Cyber AI at Darktrace, who has a unique blend of technical and soft skills. Carignan was originally a dance major but was also working for NASA as a hardware IT engineer, which forged her path into AI and cybersecurity.Where did you go to…

ChatGPT 4 can exploit 87% of one-day vulnerabilities: Is it really that impressive?

2 min read - After reading about the recent cybersecurity research by Richard Fang, Rohan Bindu, Akul Gupta and Daniel Kang, I had questions. While initially impressed that ChatGPT 4 can exploit the vast majority of one-day vulnerabilities, I started thinking about what the results really mean in the grand scheme of cybersecurity. Most importantly, I wondered how a human cybersecurity professional’s results for the same tasks would compare.To get some answers, I talked with Shanchieh Yang, Director of Research at the Rochester Institute…

How cyber criminals are compromising AI software supply chains

3 min read - With the adoption of artificial intelligence (AI) soaring across industries and use cases, preventing AI-driven software supply chain attacks has never been more important.Recent research by SentinelOne exposed a new ransomware actor, dubbed NullBulge, which targets software supply chains by weaponizing code in open-source repositories like Hugging Face and GitHub. The group, claiming to be a hacktivist organization motivated by an anti-AI cause, specifically targets these resources to poison data sets used in AI model training.No matter whether you use…

Topic updates

Get email updates and stay ahead of the latest threats to the security landscape, thought leadership and research.
Subscribe today