Cybersecurity concerns in 2024 can be summed up in two letters: AI (or five letters if you narrow it down to gen AI). Organizations are still in the early stages of understanding the risks and rewards of this technology. For all the good AI can do to improve data protection, keep pace with compliance regulations and enable faster threat detection, threat actors are also using it to accelerate their social engineering attacks and sabotage AI models with malware.
AI might have gotten the lion’s share of attention in 2024, but it wasn’t the only cyber threat organizations had to deal with. Credential theft continues to be problematic, with a 71% year-over-year increase in attacks using compromised credentials. The skills shortage persists, costing companies an additional $1.76 million in the aftermath of a data breach. And as more companies rely on the cloud, it shouldn’t be surprising that there has been a spike in cloud intrusions.
But there have been positive steps in cybersecurity over the past year. CISA’s Secure by Design program signed on more than 250 software manufacturers to improve their cybersecurity hygiene. CISA also introduced its Cyber Incident Reporting Portal to improve the way organizations share cyber information.
Last year’s cybersecurity predictions focused heavily on AI and its impact on how security teams will operate in the future. This year’s predictions also emphasize AI, suggesting that cybersecurity may have reached a point where security and AI are interdependent, for both good and bad.
Here are this year’s predictions.
Shadow AI is everywhere (Akiba Saeedi, Vice President, IBM Security Product Management)
Shadow AI will prove to be more common — and risky — than we thought. Businesses have more and more generative AI models deployed across their systems each day, sometimes without their knowledge. In 2025, enterprises will truly see the scope of “shadow AI” – unsanctioned AI models used by staff that aren’t properly governed. Shadow AI presents a major risk to data security, and businesses that successfully confront this issue in 2025 will use a mix of clear governance policies, comprehensive workforce training and diligent detection and response.
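As one illustration of what “diligent detection and response” could look like in practice, here is a minimal sketch that scans egress or proxy logs for calls to generative AI API hosts that are not on a sanctioned list. The log format, host names and sanctioned set are illustrative assumptions, not a standard or any product’s behavior.

```python
# Minimal sketch: flag unsanctioned gen AI usage from egress/proxy logs.
# The "<user> <url>" log format and the host lists are assumptions for the demo.

from collections import Counter
from urllib.parse import urlparse

# Hypothetical gen AI API hosts a security team might watch for.
GEN_AI_HOSTS = {
    "api.openai.com",
    "api.anthropic.com",
    "generativelanguage.googleapis.com",
}

# Hosts explicitly approved by the organization (assumption for the demo).
SANCTIONED_HOSTS = {"api.openai.com"}


def flag_shadow_ai(log_lines):
    """Count requests to gen AI hosts that are not sanctioned."""
    hits = Counter()
    for line in log_lines:
        try:
            user, url = line.split(maxsplit=1)
        except ValueError:
            continue  # skip malformed lines
        host = urlparse(url.strip()).netloc
        if host in GEN_AI_HOSTS and host not in SANCTIONED_HOSTS:
            hits[(user, host)] += 1
    return hits


if __name__ == "__main__":
    sample_logs = [
        "alice https://api.openai.com/v1/chat/completions",
        "bob https://api.anthropic.com/v1/messages",
        "bob https://api.anthropic.com/v1/messages",
    ]
    for (user, host), count in flag_shadow_ai(sample_logs).items():
        print(f"unsanctioned gen AI use: user={user} host={host} requests={count}")
```

In practice this kind of signal would feed a broader governance workflow rather than block usage outright, which is why the prediction pairs detection with clear policies and workforce training.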
Identity’s transformation (Wes Gyure, Executive Director, IBM Security Product Management)
How enterprises think about identity will continue to transform in the wake of hybrid cloud and app modernization initiatives. Recognizing that identity has become the new security perimeter, enterprises will continue their shift to an Identity-First strategy, managing and securing access to applications and critical data, including gen AI models. In 2025, a fundamental component of this strategy will be building an effective identity fabric, a product-agnostic, integrated set of identity tools and services. When done right, this will be a welcome relief to security professionals, taming the chaos and risk caused by a proliferation of multicloud environments and scattered identity solutions.
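To make “product-agnostic” concrete, here is a minimal sketch of the identity fabric idea: a common provider interface that individual identity products plug into, with a single enforcement point on top. The provider classes, token formats and role names below are illustrative assumptions, not any vendor’s actual API.

```python
# Sketch of an identity-fabric style abstraction: many providers, one contract.
from __future__ import annotations

from dataclasses import dataclass
from typing import Protocol


@dataclass
class Identity:
    subject: str
    roles: tuple[str, ...]
    source: str  # which provider asserted this identity


class IdentityProvider(Protocol):
    """Common contract every plugged-in identity product must satisfy."""

    def authenticate(self, token: str) -> Identity | None: ...


class CloudDirectoryProvider:
    """Stand-in for a SaaS directory; a real adapter would call its API."""

    def authenticate(self, token: str) -> Identity | None:
        if token.startswith("cloud:"):
            return Identity(token.removeprefix("cloud:"), ("app-user",), "cloud-dir")
        return None


class OnPremLDAPProvider:
    """Stand-in for an on-prem LDAP/AD integration."""

    def authenticate(self, token: str) -> Identity | None:
        if token.startswith("ldap:"):
            return Identity(token.removeprefix("ldap:"), ("employee",), "on-prem-ldap")
        return None


class IdentityFabric:
    """Single policy-enforcement point over many identity providers."""

    def __init__(self, providers: list[IdentityProvider]):
        self.providers = providers

    def authorize(self, token: str, required_role: str) -> bool:
        for provider in self.providers:
            identity = provider.authenticate(token)
            if identity is not None:
                return required_role in identity.roles
        return False


if __name__ == "__main__":
    fabric = IdentityFabric([CloudDirectoryProvider(), OnPremLDAPProvider()])
    print(fabric.authorize("ldap:jdoe", "employee"))   # True
    print(fabric.authorize("cloud:jdoe", "employee"))  # False: wrong role
```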
Everyone must work together to manage threats (Sam Hector, Global Strategy Leader, IBM Security)
Cybersecurity teams will no longer be able to effectively manage threats in isolation. Threats from generative AI and hybrid cloud adoption are rapidly evolving. Meanwhile, the risk quantum computing poses to modern standards of public-key encryption will become unavoidable. Given the maturation of new quantum-safe cryptography standards, there will be a drive to discover encrypted assets and accelerate the modernization of cryptography management. Next year, successful organizations will be those where executives and diverse teams jointly develop and enforce cybersecurity strategies, embedding security into the organizational culture.
Prepare for post-quantum cryptography standards (Ray Harishankar, IBM Fellow, IBM Quantum Safe)
As organizations begin the transition to post-quantum cryptography over the next year, agility will be crucial to ensure systems are prepared for continued transformation, particularly as the U.S. National Institute of Standards and Technology (NIST) continues to expand its toolbox of post-quantum cryptography standards. NIST’s initial post-quantum cryptography standards were a signal to the world that the time is now to start the journey to becoming quantum-safe. But equally important is the need for crypto agility, ensuring that systems can rapidly adapt to new cryptographic mechanisms and algorithms in response to changing threats, technological advances and vulnerabilities. Ideally, automation will streamline and accelerate the process.
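In code terms, crypto agility largely comes down to indirection: algorithms are referenced by a configurable name rather than hard-coded, so a deprecated mechanism can be retired without touching the callers. The sketch below is a minimal illustration of that pattern under stated assumptions; it uses standard-library hash digests purely as stand-ins, and the same registry structure would wrap signature or key-encapsulation providers, including post-quantum ones supplied by vendor libraries.

```python
# Minimal crypto-agility sketch: an application-level algorithm registry.
import hashlib

# Registry of digest mechanisms; new entries (including post-quantum-era
# choices from a vendor library) can be registered without changing callers.
_DIGESTS = {
    "sha2-256": lambda data: hashlib.sha256(data).hexdigest(),
    "sha3-256": lambda data: hashlib.sha3_256(data).hexdigest(),
}

# Configuration, not code, decides which mechanism is active.
ACTIVE_DIGEST = "sha2-256"


def register_digest(name, fn):
    """Plug in a new mechanism, e.g. from a future or vendor-supplied library."""
    _DIGESTS[name] = fn


def protect(data: bytes, algorithm: str = None) -> dict:
    """Produce an integrity tag labeled with the algorithm that made it."""
    name = algorithm or ACTIVE_DIGEST
    return {"alg": name, "digest": _DIGESTS[name](data)}


def verify(data: bytes, record: dict) -> bool:
    """Verify against whichever algorithm the record says was used."""
    return _DIGESTS[record["alg"]](data) == record["digest"]


if __name__ == "__main__":
    record = protect(b"customer-ledger-2025")
    print(record)                                    # tagged with the active algorithm
    print(verify(b"customer-ledger-2025", record))   # True
    # Migration: switch ACTIVE_DIGEST or register a new provider; old records
    # still verify because each one carries its algorithm label.
```

Labeling every protected artifact with the algorithm that produced it is what allows old and new mechanisms to coexist during a migration, which is the automation-friendly behavior the prediction points toward.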
Data will become a vital part of AI security (Suja Viswesan, Vice President, Security Software Development, IBM)
Data and AI security will become an essential ingredient of trustworthy AI. “Trustworthy AI” is often interpreted as AI that is transparent, fair and privacy-protecting. These are critical characteristics. But if AI and the data powering it aren’t also secure, then all other characteristics are compromised. In 2025, as businesses, governments and individuals interact with AI more often and with higher stakes, data and AI security will be viewed as an even more important part of the trustworthy AI recipe.
Organizations will continue grappling with the juxtaposition of AI’s benefits and threats (Mark Hughes, Global Managing Partner, Cybersecurity Services, IBM)
As AI matures from proof-of-concept to wide-scale deployment, enterprises will reap the benefits of productivity and efficiency gains, including automating security and compliance tasks to protect their data and assets. But organizations also need to be aware that threat actors can use AI as a new tool or conduit to breach long-standing security processes and protocols. Businesses need to adopt security frameworks, best practice recommendations and guardrails for AI, and adapt quickly to address both the benefits and risks of rapid AI advancements.
Greater understanding of AI-assisted versus AI-powered threats (Troy Bettencourt, Global Partner and Head of IBM X-Force)
Protect against AI-assisted threats; plan for AI-powered threats. There is a distinction between AI-powered and AI-assisted threats, and it shapes how organizations should think about their proactive security posture. AI-powered attacks, like deepfake video scams, have been limited to date; today’s threats remain primarily AI-assisted, meaning AI can help threat actors create variants of existing malware or craft a more convincing phishing lure. To address current AI-assisted threats, organizations should prioritize implementing end-to-end security for their own AI solutions, including protecting user interfaces, APIs, language models and machine learning operations, while remaining mindful of strategies to defend against future AI-powered attacks.
The message from these predictions is clear: understanding how AI can both help and hurt an organization is vital to protecting your company and its assets in 2025 and beyond.