For the last couple of years, much attention has focused on the evolving state of artificial intelligence (AI) technology and its impact on cybersecurity. Across many industries, the risks associated with AI-generated attacks remain a pressing concern, especially as the global average cost of a data breach rose 10% over the past year.

However, according to the most recent Cloud Threat Landscape Report from IBM’s X-Force team, the near-term threat of an AI-generated attack targeting cloud computing environments is actually moderately low. Still, X-Force projects that an increase in these sophisticated attack methods could be on the horizon.

Current status of the cloud computing market

The cloud computing market continues to grow exponentially, with experts expecting its value to exceed $675 billion by the end of 2024. As more organizations expand their operational capabilities beyond on-premises restrictions and leverage public and private cloud infrastructure and services, adoption of AI technology is steadily increasing across multiple industry sectors.

Generative AI’s rapid integration into cloud computing platforms has created many opportunities for businesses, especially when enabling better automation and efficiency in the deployment, provisioning and scalability of IT services and SaaS applications.

However, as more businesses rely on new disruptive technologies to help them maximize the value of their cloud investments, the potential security danger that generative AI poses is something closely monitored by various cybersecurity organizations.


Why are AI-generated attacks in the cloud currently considered lower risk?

Although AI-generated attacks remain among the top emerging risks cited by senior risk and assurance executives in a recent Gartner report, X-Force’s research finds that the current threat of AI technologies being exploited and leveraged in cloud infrastructure attacks is moderately low.

That said, AI technology is regularly used in the development and distribution of highly sophisticated phishing schemes at scale. This behavior has already been observed with active malware distributors like Hive0137, which use large language models (LLMs) when scripting new dark web tools. The lower risk projections instead refer to the likelihood of AI platforms themselves being directly targeted in both cloud and on-premises environments.

One of the primary reasons for this lower risk is the complexity of successfully breaching and manipulating the underlying infrastructure of AI deployments. Even if attackers invested considerable resources in the effort, the still relatively low market saturation of cloud-based AI tools and solutions would likely yield a poor return on the time, resources and risk involved in carrying out these attacks.

Preparing for an inevitable increase in AI-driven cloud threats

While the immediate risks of AI-driven cloud threats may be lower today, organizations should still prepare for this to change in the near future.

IBM’s X-Force team has recognized correlations between the percentage of market share new technologies have across various markets and the trigger points related to their associated cybersecurity risks. According to the recent X-Force analysis, once generative AI matures and approaches 50% market saturation, it’s likely that its attack surface will become a larger target for cyber criminals.

For organizations currently using AI technologies and proceeding with cloud adoption, designing more secure AI strategies is essential. This includes developing stronger identity security postures, integrating security throughout their cloud development processes and safeguarding the integrity of their data and AI models.
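One piece of that data- and model-integrity practice can be sketched simply: verifying that model and data artifacts match trusted digests before they are deployed or consumed. The manifest name and helper functions below are illustrative assumptions, not part of any specific product or the report itself — a minimal sketch using only the Python standard library:

```python
import hashlib

def sha256_digest(data: bytes) -> str:
    """Return the SHA-256 hex digest of an in-memory artifact."""
    return hashlib.sha256(data).hexdigest()

def verify_artifact(data: bytes, expected_digest: str) -> bool:
    """Check an artifact against a trusted digest from a signed manifest."""
    return sha256_digest(data) == expected_digest

# Hypothetical manifest: artifact names mapped to digests recorded at
# build/training time and stored separately from the artifacts themselves.
manifest = {
    "model-weights.bin": sha256_digest(b"example-weights"),
}

# An untampered artifact verifies; a modified one does not.
ok = verify_artifact(b"example-weights", manifest["model-weights.bin"])
tampered = verify_artifact(b"example-weights-x", manifest["model-weights.bin"])
print(ok, tampered)
```

In practice the manifest itself would need protection (for example, cryptographic signing and separate storage), since an attacker who can rewrite both the artifact and its digest defeats the check.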
