July 16, 2024 By Limor Kessem 6 min read

Generative artificial intelligence (gen AI) and its use by businesses to enhance operations and profits are the focus of innovation in virtually every sector and industry. Gartner predicts that global spending on AI software will surge from $124 billion in 2022 to $297 billion by 2027. Businesses are upskilling their teams and hiring costly experts to implement new use cases, find new ways to leverage data and adopt open-source tooling and resources. What they have not examined as closely is AI security.

The IBM Institute for Business Value (IBV) surveyed executives to learn about their awareness and approach to AI security. The survey found that only 24% of gen AI projects have a security component. These results show that AI implementations are proliferating while AI security and governance controls are lagging.

This concerning statistic likely extends beyond AI implementations. As with any security program, organizations that lag on foundational security are often ill-prepared to address the threats and attacks that can impact their gen AI applications.

Mounting concerns around disruption and data impact

The same IBV survey found that most executives are aware of threats and incidents that can affect AI initiatives. Respondents expressed concerns regarding their adoption of gen AI, with more than half (56%) saying they fear an increased potential for business disruption. A full 96% said that adopting gen AI makes a security breach likely in their organization within the next three years.

With attackers increasingly likely to target AI initiatives, breaches and disruptions will find organizations unprepared unless concerns about risk are translated into actionable plans.

Incident response planning and drilling deserve added attention, even for technical teams. Only about one in five companies has a technical response plan in place, and fewer still also have executive or strategic plans. These statistics are alarming given that the stakes for breaches involving AI-related data are higher, mainly due to the volume of data involved, its sensitive classification and the expanded attack surface created by interacting with third-party platforms, models and infrastructure.

Impactful threats are already here

Are you expecting AI security to feature more exotic threats that will take attackers time to figure out? Impactful AI threats already exist. While a set of fundamentally new threats to organizations’ gen AI initiatives is emerging, most are variations on existing threats or familiar issues with new vulnerabilities and exposures to consider. As such, costly disruption can come from familiar places in the technology stack.

Infrastructure, applications, application programming interfaces (APIs) and unprotected data are common places where attackers target organizations daily. These targets promise high rewards in the form of stolen sensitive data, personally identifiable information (PII) and intellectual property. High-impact attacks too often stem from supply chain compromise and collateral damage, and these are more likely to occur when using external data sources, third-party and open-source models, and APIs.

For an attacker, managing to compromise shared models is a “hack once, hit all” jackpot without needing sophisticated skills to achieve success.
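
There is no single fix for shared-model risk, but one basic control is verifying the integrity of third-party model artifacts before loading them. The Python sketch below pins a known-good SHA-256 digest for a downloaded model file and fails closed on a mismatch. The file path, digest value and final load step are hypothetical placeholders; in practice, pinned digests would come from the model provider or your own vetting process.

```python
import hashlib
from pathlib import Path

# Hypothetical pinned digest, e.g., published by the model provider or
# recorded when the artifact was first vetted by your team.
EXPECTED_SHA256 = "9f2c...replace-with-your-pinned-digest..."

def sha256_of(path: Path, chunk_size: int = 1 << 20) -> str:
    """Stream the file so large model artifacts don't load into memory."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def load_model_safely(path: Path):
    actual = sha256_of(path)
    if actual != EXPECTED_SHA256:
        # Fail closed: treat a mismatch as a potential supply chain
        # compromise and hand off to incident response instead of loading.
        raise RuntimeError(f"Model artifact {path} failed integrity check: {actual}")
    # Proceed with the framework-specific loading step only after the check.
    ...
```

A check like this does not stop every supply chain attack, but it does ensure a tampered artifact cannot silently replace a vetted one between download and deployment.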


Is your security team prepared for an AI compromise?

Gen AI can greatly benefit the organization, and that promise has let innovation take precedence over AI-related security. As a result, gen AI initiatives are being left unsecured in most cases, even as attacks are likelier than ever to affect AI implementations.

Considering these factors, is your security team preparing to handle compromises that can impact your AI workloads? What sort of thinking is going into detecting attacks that might have impacted your models? What about data being moved around and used by your gen AI initiatives in a public cloud? Beyond controls, are response plans being drafted to contain, find the root cause of and remedy an AI compromise?

Regarding AI initiatives, let’s not forget that a lot is happening in the cloud, where visibility can be siloed and where security and incident response are a shared responsibility with providers. Can you say with confidence that the security teams and the other teams working on AI initiatives know how that shared responsibility will be managed during an AI-related breach? Are there clear plans for each type of active use case, involving the relevant stakeholders? Preparing to activate your third parties’ support in an AI-related crisis will prove critical.

The view from executive suites

Let’s assume that the security and technical teams involved in AI implementations have plans to detect, contain and recover from an AI compromise. How prepared is the C-suite to do the same in other business aspects? A major compromise of AI models, data or infrastructure can cause significant disruption without a clear timeline for recovery. Such an attack can quickly escalate to a crisis-level event that will require executives to take over and lead the response.

The AI-related impact is as varied as the countless use cases organizations implement and can differ across sectors. Consider the implications for AI-enabled industrial plant operations, web services that use AI-enabled assistants or AI-enhanced fraud detection. A cyberattack may initially disrupt these programs, but the resulting business impact requires executives to decide how to manage it, to prioritize recovery based on real-time impact analyses and to carry out leadership’s intent throughout the event.

Cyberattacks are notorious for causing impact through unauthorized access to sensitive data. Organizations may seem well-prepared to deal with that if they have to, but just as some threats have new AI twists, so do response demands. Are your data protection officer and compliance team equipped with plans to address new, AI-specific regulatory demands? Think of a scenario where an attacker has poisoned a central model, causing unintended bias against specific groups of individuals. What strategic thinking is going into detecting the issue, remedying it, and supporting and compensating the affected communities?
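
As a simplified illustration of what such detection might look like, the Python sketch below compares an approval-style model’s positive-outcome rate across demographic groups against a vetted baseline and flags large deviations that could indicate poisoning-induced bias. The group labels, baseline rates and alert threshold are hypothetical assumptions; real fairness monitoring involves far more rigorous statistics and governance.

```python
from collections import defaultdict

# Hypothetical baseline positive-outcome rates per group, measured on a
# vetted model version before deployment.
BASELINE_RATES = {"group_a": 0.42, "group_b": 0.41, "group_c": 0.43}
ALERT_THRESHOLD = 0.10  # flag deviations above 10 percentage points

def group_rates(predictions):
    """predictions: iterable of (group_label, is_positive) pairs."""
    totals, positives = defaultdict(int), defaultdict(int)
    for group, is_positive in predictions:
        totals[group] += 1
        positives[group] += int(is_positive)
    return {g: positives[g] / totals[g] for g in totals}

def check_for_bias_drift(predictions):
    alerts = []
    for group, rate in group_rates(predictions).items():
        baseline = BASELINE_RATES.get(group)
        if baseline is not None and abs(rate - baseline) > ALERT_THRESHOLD:
            alerts.append((group, baseline, rate))
    return alerts  # any non-empty result should feed the response plan

# Example: group_b's positive rate has collapsed versus its baseline,
# which warrants investigation under the response plan.
sample = [("group_a", True)] * 42 + [("group_a", False)] * 58 \
       + [("group_b", True)] * 12 + [("group_b", False)] * 88
print(check_for_bias_drift(sample))  # [('group_b', 0.41, 0.12)]
```

The strategic questions above begin where a monitor like this ends: an alert only has value if someone owns triage, remediation and communication with the affected communities.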

Other response scenarios demand attention as well: the controlled takedown of AI-enabled services or operations, adapting the eDiscovery legal process to data used by AI systems, communicating about a breach in a way that preserves customer loyalty and reputation, assessing specific legal ramifications and more. Security teams alone are often not equipped to manage these types of issues.

Prepare for gen AI disruption on every organizational level

There is much to unpack in terms of leadership and cross-functional support for AI-related breaches. Decisions about direction and top-level policies connect best with business objectives when they are delivered top-down.

If your organization is already implementing gen AI use cases, the executive team and board members should also emphasize preparedness for leading through AI-related cyber crises. Getting on the right track does not have to be complicated: it begins with reviewing organizational governance related to major cyberattacks and securing C-suite support for an AI-related preparedness and response plan.

Figure 1: IBM framework for securing gen AI

Once plans are in place that define thresholds, assign crisis leadership, integrate response streams and cover the likeliest scenarios, the next step is to drill a high-impact AI compromise and test the organization’s ability to mount an effective response.

According to the Cost of a Data Breach report, organizations that regularly drill their incident response can considerably reduce the duration and cost of breaches and better withstand disruption.

Build preparedness through planning for AI disruption

A significant cyberattack, especially one involving your AI implementations, can escalate quickly to significantly affect operations, brand, reputation and financial standing. It can even threaten the very existence of the enterprise. Building preparedness is an essential, recurring activity that can help reduce the impact of crisis-level attacks.

Start by building plans for your technical and executive teams, with corresponding roles and action plans as well as shared escalation paths and criteria. Once a plan is in place, develop playbooks for your most impactful AI-related disruption scenarios to guide teams through detection, containment and eradication, and to launch the most effective recovery strategy, as sketched below.
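
To make that concrete, here is one way a playbook for a single AI-related scenario might be structured as data, with phases for detection, containment, eradication and recovery, plus role-based owners and escalation criteria. The scenario, roles and steps are illustrative assumptions, not a prescribed X-Force format.

```python
from dataclasses import dataclass, field

@dataclass
class PlaybookStep:
    action: str
    owner: str  # a role, not a named individual, so the plan survives turnover

@dataclass
class Playbook:
    scenario: str
    escalation_criteria: list[str]  # when technical response hands off to executives
    detect: list[PlaybookStep] = field(default_factory=list)
    contain: list[PlaybookStep] = field(default_factory=list)
    eradicate: list[PlaybookStep] = field(default_factory=list)
    recover: list[PlaybookStep] = field(default_factory=list)

# Illustrative playbook for a poisoned shared model (details are assumptions).
model_poisoning = Playbook(
    scenario="Suspected poisoning of a shared gen AI model",
    escalation_criteria=[
        "Customer-facing output affected",
        "Regulated or sensitive data implicated",
    ],
    detect=[PlaybookStep("Review output-drift and bias alerts", "ML engineering")],
    contain=[PlaybookStep("Route traffic to last vetted model version", "Platform ops")],
    eradicate=[PlaybookStep("Trace and purge tainted training data", "Data engineering")],
    recover=[PlaybookStep("Retrain, re-validate and redeploy", "ML engineering")],
)
```

Keeping playbooks in a structured form like this makes them easy to review in drills and to version alongside the systems they protect.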

Don’t neglect to build key performance indicators into your plans, and develop a lessons-learned process robust enough to share insights across the organization. Those lessons can then serve as a solid reference for evolving the plans over time.

IBM X-Force is here to help. Our proactive experts can help you make plans that follow industry standards and best practices. You can rely on X-Force’s vast experience gained through countless response engagements across all sectors.

Detailed plans for technical teams: IBM X-Force Incident Response specializes in incident preparedness, detection, response and recovery. Through planning and testing, our goal is to reduce the business impact of a breach and improve resiliency to attacks. Schedule a discovery briefing with our X-Force team to discuss technical response planning for an AI-related compromise.

Strategic planning for executive teams: If your team wants to build executive plans and playbooks for an AI-related compromise, check out X-Force Cyber Crisis Management.
