October 10, 2023 By Chris McCurdy 3 min read

Generative AI (GenAI) is poised to deliver significant benefits to enterprises and their ability to readily respond to and effectively defend against cyber threats. But AI that is not itself secured may introduce a whole new set of threats to businesses. Today IBM’s Institute for Business Value published “The CEO’s guide to generative AI: Cybersecurity,” part of a larger series providing guidance for senior leaders planning to adopt generative AI models and tools. The guide highlights key considerations for CEOs: both the cybersecurity benefits GenAI can bring to their enterprises and the potential risks it can introduce.

The guidance draws on insights from 200 C-suite leaders and reveals that despite substantial concerns over risks, enterprises are moving full steam ahead with GenAI adoption, eager to reap the rewards and efficiencies promised by GenAI innovation. Key highlights include:

  • Innovate first, secure later? Despite nearly all surveyed executives (94%) considering it important to secure AI solutions before deployment, 69% also say innovation takes precedence over security for GenAI.
  • AI security spend moving upwards: By 2025, AI security budgets are expected to be 116% greater than in 2021, with 84% of respondents saying they will prioritize GenAI security solutions over conventional ones.
  • GenAI viewed as a force multiplier for cyber workforce: 92% of surveyed executives say that, instead of being replaced, it is more likely their security workforce will be augmented or elevated to focus on higher-value work.

Generative AI becomes cybersecurity’s next big bet

As business leaders seek to drive more effective cybersecurity capabilities across their environments, they are expecting to spend more on generative AI-driven solutions. The overwhelming majority of survey respondents (84%) say they will prioritize generative AI security solutions over conventional ones, eager to see the promise of these innovations materialize.

The findings further emphasize the productivity gains that AI promises at the human and technology levels. Today’s AI maturity can help security analysts, empowering them to do more with less through intelligent assistants and speedier, more intuitive detection and response tools.

Survey respondents largely agreed that their workforce is the top area that would benefit from GenAI-driven security capabilities, with 52% of respondents saying generative AI solutions will positively impact their ability to develop and retain security talent — an essential requirement amid an ever-evolving threat landscape. The majority of surveyed executives also view GenAI as an accelerator of digital trust, with 52% indicating that GenAI will help them establish easier user access management, permissions, and entitlements across their organizations. Similarly, 47% of executives say GenAI will help improve the time to detect and respond to cyber threats.


Generative AI adoption outpaces security and governance

Despite nearly all executives agreeing that it’s important to secure AI solutions before deployment, 69% say innovation takes precedence over security for GenAI. Rather than incorporating security considerations into innovation efforts, business leaders appear to be prioritizing the development of new capabilities without addressing new security risks. This is despite 96% saying that adopting generative AI makes a security breach likely in their organization within the next three years.

While the survey takeaways suggest that business leaders fear they may lose a competitive edge or market lead by waiting for security to be baked into their AI-led business models, they are also concerned about increasing their risk exposure: nearly half of the study’s respondents voice concern about GenAI expanding their organizations’ attack surface. Specifically, 47% of those surveyed are concerned that adopting GenAI in operations will lead to new kinds of attacks targeting their applications, AI models, data, or services.

It’s clear that when it comes to AI, we’ve crossed a new threshold: business leaders are eager to capitalize on the benefits promised by today’s innovations. In terms of security, they’re betting on new technologies to create more empowered, more productive teams. They’re looking for faster and more intuitive ways of working — whether detecting anomalies, managing risks, or responding to security incidents.

While many business leaders appear willing to accept the risk of insufficiently secured AI operations if it means they can evolve their business faster, security and technology leaders can take this as an opportunity to influence the conversation. It’s essential to understand that secure AI drives powerful AI outcomes, and that today we have the tools, processes, and strategies to help businesses establish secure AI business models as they embark on a dynamic journey of AI adoption.

Access “The CEO’s guide to generative AI: Cybersecurity” here.

Learn more about how IBM can help businesses accelerate their AI adoption securely here.

Learn more about how IBM is leveraging AI across its security portfolio here.

