IBM’s Cost of a Data Breach Report 2024 highlights a ground-breaking finding: The application of AI-powered automation in prevention has saved organizations an average of $2.2 million.

Enterprises have been using AI for years in detection, investigation and response. However, as attack surfaces expand, security leaders must adopt a more proactive stance.

Here are three ways how AI is helping to make that possible:

1. Attack surface management: Proactive defense with AI

Increased complexity and interconnectedness are a growing headache for security teams, and attack surfaces are expanding far beyond what they can monitor using manual means alone. As organizations level up their multi-cloud strategies and onboard new SaaS tools and third-party code in software development and deployment, the challenge only intensifies.

With these larger attack surfaces come increased complexity of network interactions and many new potential entry points for adversaries to exploit. Attack surface management (ASM) brings AI-powered, real-time protection to digital infrastructures, regardless of underlying complexity.

Automated ASM greatly augments manual auditing by providing comprehensive visibility into attack surfaces. Furthermore, AI learns from the data it monitors to Improve future detection outcomes, albeit at a speed and scale that humans alone can’t match.

However, while ASM tools are often presented as turnkey solutions and are usually relatively easy to deploy, the ability of security teams to interpret the huge influx of data they generate is essential for maximizing their impact.

Read the 2024 Cost of a Data Breach report

2. Red teaming: AI goes on the offensive

AI red teaming is the process of having people stress-test AI models for potential vulnerabilities and other issues, such as bias and misinformation. While most models are designed with guardrails in place to mitigate these risks, attackers routinely try to “jailbreak” them through the use of clever prompting. For red teams, the goal is to get there before their adversaries, thereby giving them a chance to take corrective action.

Red teams can themselves use AI to help identify potential issues in the data used to train AI models. For instance, according to IBM’s report, over a third of data breaches involve shadow data. If that data, unvetted and unmonitored for quality and integrity, ends up being used in model training, the ripple effects can be significant. AI can help red teams detect shadow data by identifying anomalies and overlooked data sources that could pose security risks. Red teams can also test AI models against one another using adversarial machine learning methods to identify vulnerabilities.

Unlike ASM, red teaming involves tailored simulations specific to the organization’s data and threat landscape. To fully realize its benefits, organizations must work with skilled teams that can correctly interpret and analyze the results and implement the required changes.

3. Posture management: Continuous security at scale

Posture management is where the scalable, real-time monitoring capabilities of AI really shine. Where ASM reveals potential vulnerabilities in attack surfaces, posture management takes a much broader approach by monitoring configurations, compliance with security policies and connections between both internal and external systems in a manner that’s continuous, agile and adaptable.

By automating posture management with AI, security teams can mitigate risks in far less time and scale their efforts across complex multi-cloud infrastructures to ensure consistency across the board. Also, given the reduced reliance on manual processes, the chances of human error are greatly reduced.

Even when breaches do occur, organizations that extensively incorporate AI and automation into their posture management strategies can identify and mitigate them nearly 100 days faster than those that don’t use AI at all. Naturally, the time saved in both prevention and remediation results in substantial direct and indirect cost savings, too.

AI is a game changer, but the human element is as important as ever

The AI opportunity in cybersecurity is undeniable. Not only can it help scale strategies across increasingly complex environments, but it can also help democratize security by allowing less experienced analysts to interact with security systems using natural language queries.

However, that’s not to suggest that AI is a replacement for human expertise. Rather, it must complement it.

AI and automation in security have helped organizations save millions in potential damages and remediation efforts, but they still need people to understand the data and insights that AI provides to maximize its potential.

That’s why managed security services have an increasingly important role to play in ensuring that AI adoption is strategically aligned with business needs and goals — instead of being deployed solely for reducing costs and labor.

More from Artificial Intelligence

AI hallucinations can pose a risk to your cybersecurity

4 min read - In early 2023, Google’s Bard made headlines for a pretty big mistake, which we now call an AI hallucination. During a demo, the chatbot was asked, “What new discoveries from the James Webb Space Telescope can I tell my 9-year-old about?" Bard answered that JWST, which launched in December 2021, took the "very first pictures" of an exoplanet outside our solar system. However, the European Southern Observatory's Very Large Telescope took the first picture of an exoplanet in 2004.What is…

Best practices on securing your AI deployment

4 min read - As organizations embrace generative AI, there are a host of benefits that they are expecting from these projects—from efficiency and productivity gains to improved speed of business to more innovation in products and services. However, one factor that forms a critical part of this AI innovation is trust. Trustworthy AI relies on understanding how the AI works and how it makes decisions.According to a survey of C-suite executives from the IBM Institute for Business Value, 82% of respondents say secure and…

Navigating the ethics of AI in cybersecurity

4 min read - Even if we’re not always consciously aware of it, artificial intelligence is now all around us. We’re already used to personalized recommendation systems in e-commerce, customer service chatbots powered by conversational AI and a whole lot more. In the realm of information security, we’ve already been relying on AI-powered spam filters for years to protect us from malicious emails.Those are all well-established use cases. However, since the meteoric rise of generative AI in the last few years, machines have become…

Topic updates

Get email updates and stay ahead of the latest threats to the security landscape, thought leadership and research.
Subscribe today