July 16, 2024 | By Limor Kessem | 5 min read

Generative artificial intelligence (Gen AI) and its use by businesses to enhance operations and profits are the focus of innovation in virtually every sector and industry. Gartner predicts that global spending on AI software will surge from $124 billion in 2022 to $297 billion by 2027. Businesses are upskilling their teams and hiring costly experts to implement new use cases, new ways to leverage data and new ways to use open-source tooling and resources. What they have not examined as closely is AI security.

The IBM Institute for Business Value (IBV) surveyed executives to learn about their awareness and approach to AI security. The survey found that only 24% of Gen AI projects have a security component. These results show that AI implementations are proliferating while AI security and governance controls are lagging.

This concerning statistic likely reflects a broader pattern. As with any security program, organizations that lag on foundational security are often ill-prepared to address threats and attacks that can impact their Gen AI applications.

Mounting concerns around disruption and data impact

The same IBV survey found that most executives are aware of threats and incidents that can affect AI initiatives. Respondents expressed concerns about their adoption of Gen AI, with more than half (56%) saying they fear the increased potential for business disruption. Fully 96% said that adopting Gen AI makes a security breach likely in their organization within the next three years.

With the likelihood of attackers targeting AI initiatives rising, breaches and disruptions are likely to find organizations unprepared unless concerns about risk are translated into actionable plans.

Incident response planning and drilling deserve added attention, even among technical teams. Only about one in five companies have technical response plans in place, and fewer still also have executive-level strategic plans. These numbers are alarming considering that the stakes for breaches involving AI-related data are higher, mainly due to the volume of data involved, its sensitive classification and the expanded attack surface that comes with third-party platforms, models and infrastructure.

Impactful threats are already here

Are you expecting AI security to feature exotic threats that will take attackers time to figure out? Impactful AI threats already exist. While a set of fundamentally new threats to organizations’ Gen AI initiatives is emerging, most are variations on existing threats or familiar issues with new vulnerabilities and exposures to consider. As such, costly disruption can come from familiar places in the technology stack.

Infrastructure, applications, application programming interfaces (APIs) and unprotected data are where attackers target organizations daily. These targets promise high rewards in stolen sensitive data, personally identifiable information (PII) and intellectual property. High-impact attacks too often stem from supply chain compromise and collateral damage, and these are more likely to occur when using external data sources, third-party and open-source models and APIs.

For an attacker, compromising a widely shared model is a “hack once, hit all” jackpot that requires no sophisticated skills to pull off.
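One practical mitigation is to treat third-party model artifacts like any other supply chain dependency and verify their integrity before loading them. Below is a minimal Python sketch of that idea, assuming you record known-good SHA-256 digests when artifacts are first vetted; the file path and digest shown are hypothetical placeholders, not a prescribed standard.

```python
import hashlib
from pathlib import Path

# Hypothetical allowlist of known-good digests, recorded when each
# third-party artifact was first vetted. Path and digest are placeholders.
TRUSTED_DIGESTS = {
    "models/classifier.safetensors": "replace-with-recorded-sha256-digest",
}

def sha256_of(path: Path) -> str:
    """Stream the file through SHA-256 so large model files fit in memory."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_artifact(path_str: str) -> None:
    """Refuse artifacts that are unvetted or whose digest has drifted."""
    expected = TRUSTED_DIGESTS.get(path_str)
    if expected is None:
        raise RuntimeError(f"{path_str} is not on the vetted-artifact allowlist")
    actual = sha256_of(Path(path_str))
    if actual != expected:
        raise RuntimeError(f"{path_str} failed integrity check: got {actual}")

# verify_artifact("models/classifier.safetensors")  # call before loading
```

A failed check does not prove an attack, but it turns a silent supply chain compromise into a loud, investigable event before the artifact reaches production.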


Is your security team prepared for an AI compromise?

Gen AI can greatly benefit the organization, but innovation is taking precedence over AI-related security. Most Gen AI initiatives are being left unsecured, and attacks are likelier than ever to affect AI implementations.

Considering these factors, is your security team preparing to handle compromises that can impact your AI workloads? What sort of thinking is going into detecting attacks that might have impacted your models? What about data being moved around and used by your Gen AI initiatives in a public cloud? Beyond controls, are response plans being drafted to contain, find the root cause of and remedy an AI compromise?

Regarding AI initiatives, let’s not forget that a lot is happening in the cloud, where visibility can be siloed and where security and response are a shared responsibility with providers. Can you confirm that the security and other teams working on AI initiatives know how these responsibilities will be managed during an AI-related breach? Are there clear plans for each type of active use case involving the relevant stakeholders? Preparing to activate your third parties’ support in an AI-related crisis will prove critical.

The view from executive suites

Let’s assume that the security and technical teams involved in AI implementations have plans to detect, contain and recover from an AI compromise. How prepared is the C-suite to do the same across the rest of the business? A major compromise of AI models, data or infrastructure can cause significant disruption without a clear timeline for recovery. Such an attack can quickly escalate to a crisis-level event that requires executives to take over and lead the response.

The AI-related impact is as varied as the countless use cases organizations implement, and it can differ across sectors. Consider the implications for AI-enabled industrial plant operations, web services that use AI-enabled assistants or AI-enhanced fraud detection. A cyberattack may initially disrupt these programs, but the resulting business impact requires executives to decide how to manage it, prioritize recovery based on real-time impact analyses and carry the leader’s intent throughout the event.

Cyberattacks are notorious for causing impact through unauthorized access to sensitive data. Organizations may seem well-prepared to deal with that, but just as some threats have new AI twists, so do response demands. Are your data protection officer and compliance team equipped with plans to address new, AI-specific regulatory demands? Think of a scenario where an attacker has poisoned a central model, causing unintended bias against specific groups of individuals. What strategic thinking is going into detecting the issue, remedying it, and supporting and compensating the affected communities?
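To make that scenario concrete, here is a minimal, hypothetical Python sketch of one common screening technique: comparing favorable-outcome rates across groups in recent model decisions and flagging the model for review when the worst-to-best ratio falls below a chosen threshold. The record format, group labels and the 0.8 threshold are illustrative assumptions, not a prescribed standard.

```python
from collections import defaultdict

def disparate_impact(decisions: list[tuple[str, bool]], threshold: float = 0.8):
    """decisions: (group_label, favorable_outcome) pairs from audit logs.

    Returns the worst-to-best favorable-rate ratio and whether it clears
    the threshold. A sudden drop after a model update is a signal to
    investigate possible poisoning, not proof of it.
    """
    totals, favorable = defaultdict(int), defaultdict(int)
    for group, outcome in decisions:
        totals[group] += 1
        favorable[group] += outcome
    rates = {g: favorable[g] / totals[g] for g in totals}
    worst, best = min(rates.values()), max(rates.values())
    ratio = worst / best if best else 1.0
    return ratio, ratio >= threshold

# Toy audit log: group B's favorable rate has dropped well below group A's.
sample = [("A", True)] * 80 + [("A", False)] * 20 \
       + [("B", True)] * 40 + [("B", False)] * 60
ratio, within = disparate_impact(sample)
print(f"impact ratio {ratio:.2f}, within threshold: {within}")  # 0.50, False
```

A check like this only surfaces a symptom; the strategic questions above, from root-cause analysis to supporting affected communities, still require executive ownership.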

How about other response scenarios? Consider the controlled takedown of AI-enabled services or operations, adapting the eDiscovery legal process to data used by AI systems, communicating about the breach in a way that preserves customer loyalty and reputation, assessing specific legal ramifications and more. Security teams are rarely equipped to manage these types of issues on their own.

Prepare for Gen AI disruption on every organizational level

There is much to unpack in terms of leadership and cross-functional support for AI-related breaches. Decisions about direction and top-level policies connect best with business objectives when they are delivered top-down.

If your organization is already implementing Gen AI use cases, the executive team and board members should also emphasize preparedness for leading through AI-related cyber crises. Getting on the right track does not have to be complicated. It begins with reviewing organizational governance for major cyberattacks and securing C-suite support for an AI-related preparedness and response plan.

Figure 1: IBM framework for securing Gen AI

Once plans are in place that define thresholds, crisis leadership and integrated response streams, and that cover the likeliest scenarios, the next step is to drill a high-impact AI compromise and test the organization’s ability to mount an effective response.

According to the Cost of a Data Breach report, organizations that regularly drill their incident response plans can considerably reduce the duration and cost of breaches and better withstand disruption.

Build preparedness through planning for AI disruption

A significant cyberattack, especially one involving your AI implementations, can escalate quickly to significantly affect operations, brand, reputation and financial standing. It can even threaten the very existence of the enterprise. Building preparedness is an essential, recurring activity that can help reduce the impact of crisis-level attacks.

Start by building plans for your technical and executive teams, with corresponding roles and action plans as well as shared escalation paths and criteria. Once a plan is in place, develop playbooks for your most impactful AI-related disruption scenarios to guide teams through detecting, containing and eradicating the threat and launching the most effective recovery strategy, as in the sketch below.
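As an illustration of what such a playbook might look like in practice, this hypothetical Python sketch encodes one AI-compromise scenario as structured data so it can be versioned, peer reviewed and exercised in drills. Every step, phase name and escalation criterion below is a placeholder to adapt to your own plans, not a definitive runbook.

```python
# Illustrative only: one hypothetical scenario playbook encoded as data.
PLAYBOOK = {
    "scenario": "Poisoned third-party model serving production traffic",
    "detect": [
        "Alert on failed artifact integrity checks in the model registry",
        "Monitor output drift against a pre-release baseline",
    ],
    "contain": [
        "Route traffic to the last known-good model version",
        "Freeze pipelines that retrain on production data",
    ],
    "eradicate": [
        "Purge the compromised artifact from registries and caches",
        "Rotate credentials used by the model-serving pipeline",
    ],
    "recover": [
        "Redeploy from a vetted artifact and re-run validation suites",
        "Feed root-cause findings into the lessons-learned process",
    ],
    "escalate_to_executive_team_when": [
        "Customer-facing services are degraded beyond an agreed threshold",
        "There is evidence of sensitive data exposure",
    ],
}

# Print the playbook phases in order, as a drill facilitator might.
for phase in ("detect", "contain", "eradicate", "recover"):
    print(phase.upper())
    for step in PLAYBOOK[phase]:
        print(f"  - {step}")
```

Keeping playbooks as reviewable data rather than prose buried in documents makes it easier to drill them, update them after each exercise and keep technical and executive tracks aligned on the same escalation criteria.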

Don’t neglect to build key performance indicators into your plans, and develop a lessons-learned process robust enough to share insights across the organization. These lessons can then serve as a solid reference for evolving plans over time.
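As a simple illustration, the hypothetical sketch below computes two widely used response KPIs, mean time to detect (MTTD) and mean time to recover (MTTR), from drill or incident records. The field names and timestamps are invented for the example; map them to whatever your own incident tracking captures.

```python
from datetime import datetime
from statistics import mean

# Hypothetical drill records; map the field names to your own tracking data.
incidents = [
    {"occurred": datetime(2024, 3, 1, 9, 0),
     "detected": datetime(2024, 3, 1, 11, 30),
     "recovered": datetime(2024, 3, 2, 9, 0)},
    {"occurred": datetime(2024, 5, 10, 14, 0),
     "detected": datetime(2024, 5, 10, 14, 45),
     "recovered": datetime(2024, 5, 10, 20, 0)},
]

def hours(delta):
    """Convert a timedelta to fractional hours for readable KPIs."""
    return delta.total_seconds() / 3600

mttd = mean(hours(i["detected"] - i["occurred"]) for i in incidents)
mttr = mean(hours(i["recovered"] - i["detected"]) for i in incidents)
print(f"MTTD: {mttd:.1f} h, MTTR: {mttr:.1f} h")  # MTTD: 1.6 h, MTTR: 13.4 h
```

Tracking these figures drill over drill gives the lessons-learned process something measurable to improve against.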

IBM X-Force is here to help. Our proactive experts can help you make plans that follow industry standards and best practices. You can rely on X-Force’s vast experience gained through countless response engagements across all sectors.

Detailed plans for technical teams: IBM X-Force Incident Response specializes in incident preparedness, detection, response and recovery. Through planning and testing, our goal is to reduce the business impact of a breach and improve resiliency to attacks. Schedule a discovery briefing with our X-Force team to discuss technical response planning for an AI-related compromise.

Strategic planning for executive teams: If your team wants to build executive plans and playbooks for an AI-related compromise, check out X-Force Cyber Crisis Management.

