Over the last couple of years, considerable attention has focused on the evolving state of artificial intelligence (AI) technology and its impact on cybersecurity. Across many industries, the risks associated with AI-generated attacks remain present and concerning, especially as the global average cost of a data breach has risen 10% from last year.
However, according to the most recent Cloud Threat Landscape Report from IBM’s X-Force team, the near-term threat of an AI-generated attack targeting cloud computing environments is moderately low. Still, X-Force’s projections suggest that an increase in these sophisticated attack methods could be on the horizon.
Current status of the cloud computing market
The cloud computing market continues to grow rapidly, with experts expecting its value to exceed $675 billion by the end of 2024. As more organizations expand their operational capabilities beyond on-premises constraints and leverage public and private cloud infrastructure and services, adoption of AI technology is steadily increasing across multiple industry sectors.
Generative AI’s rapid integration into cloud computing platforms has created many opportunities for businesses, particularly by enabling greater automation and efficiency in the deployment, provisioning and scaling of IT services and SaaS applications.
However, as more businesses rely on new disruptive technologies to help them maximize the value of their cloud investments, the potential security risks that generative AI poses are being closely monitored by cybersecurity organizations.
Why are AI-generated attacks in the cloud currently considered lower risk?
Although AI-generated attacks rank among the top emerging risks for senior risk and assurance executives, according to a recent Gartner report, X-Force’s research indicates that the current threat of AI technologies being exploited and leveraged in cloud infrastructure attacks remains moderately low.
This isn’t to say that AI technology isn’t being used regularly in the development and distribution of highly sophisticated phishing schemes at scale. This behavior has already been observed with active malware distributors like Hive0137, which makes use of large language models (LLMs) when scripting new dark web tools. Rather, the lower risk projections apply to the likelihood of AI platforms themselves being directly targeted, in both cloud and on-premises environments.
One of the primary reasons for this lower risk is the complexity involved in successfully breaching and manipulating the underlying infrastructure of AI deployments. Even if attackers put considerable resources into the effort, the still relatively low market saturation of cloud-based AI tools and solutions would likely yield a poor return on the time, resources and risk required to carry out these attacks.
Preparing for an inevitable increase in AI-driven cloud threats
While the immediate risk of AI-driven cloud threats may be lower today, organizations should still prepare for this to change in the near future.
IBM’s X-Force team has identified a correlation between the market share a new technology holds and the point at which its associated cybersecurity risks begin to escalate. According to the recent X-Force analysis, once generative AI matures and approaches 50% market saturation, its attack surface is likely to become a larger target for cyber criminals.
For organizations currently utilizing AI technologies and proceeding with cloud adoption, designing more secure AI strategies is essential. This includes developing stronger identity security postures, integrating security throughout their cloud development processes and safeguarding the integrity of their data and AI models.
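As one simplified illustration of that last point, the sketch below shows how a deployment pipeline might verify the integrity of model and data artifacts before promoting them to a cloud environment. The file names, digest values and directory layout are assumptions made for the example, not part of the X-Force report.

```python
import hashlib
from pathlib import Path

# Hypothetical manifest of expected SHA-256 digests, recorded when the
# artifacts were approved for release (names and values are illustrative).
EXPECTED_DIGESTS = {
    "model.onnx": "9f2c8a41c6b7e5d3a0f1b2c4d5e6f70812345678abcdef0123456789abcdef01",
    "training_data.parquet": "1a2b3c4d5e6f708192a3b4c5d6e7f8091a2b3c4d5e6f708192a3b4c5d6e7f809",
}


def sha256_of(path: Path) -> str:
    """Compute the SHA-256 digest of a file, streaming it in chunks."""
    digest = hashlib.sha256()
    with path.open("rb") as fh:
        for chunk in iter(lambda: fh.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()


def verify_artifacts(artifact_dir: Path) -> bool:
    """Return True only if every expected artifact is present and unmodified."""
    for name, expected in EXPECTED_DIGESTS.items():
        path = artifact_dir / name
        if not path.exists() or sha256_of(path) != expected:
            print(f"Integrity check failed for {name}")
            return False
    return True


if __name__ == "__main__":
    if verify_artifacts(Path("./artifacts")):
        print("All artifacts verified; safe to proceed with deployment.")
    else:
        raise SystemExit("Blocking deployment: artifact integrity check failed.")
```

In practice, a check like this would typically be paired with signed artifacts, identity-aware access controls and policy enforcement in the CI/CD pipeline rather than hard-coded digests.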