Breaches at organizations that employ automation and AI in their security systems cost, on average, over USD 3 million less than breaches at businesses with no such deployment. This takeaway comes from the latest annual Cost of a Data Breach report, sponsored, analyzed and published by IBM Security™ using research conducted by the Ponemon Institute.

The Benefits of Automation Keep Growing

In the 2021 report, the average cost of a breach at organizations with fully deployed security automation was USD 3.81 million lower than at organizations with no security automation.

This continues the trend of a widening gap between organizations with and without security automation indicated by previous Cost of a Data Breach reports. In 2020, organizations with fully deployed security automation paid USD 3.58 million less than those with no automation in place; in 2019, the gap was USD 2.51 million.


At the same time, the share of businesses that have at least partially deployed security automation or AI increased six points from 2020 to 2021, from 59 percent to 65 percent. Over the same period, respondents reporting fully deployed automation rose from 21 percent to 25 percent, while those reporting partially deployed automation grew from 38 percent to 40 percent.

Automation and AI dramatically reduce the days needed to identify and contain a data breach. For organizations with fully deployed security AI or automation, it took an average of 184 days to identify the breach and 63 days to contain the breach, for a total lifecycle of 247 days. Organizations with no security AI or automation deployed took an average of 239 days to identify the breach and 85 days to contain, for a total lifecycle of 324 days.

To put this difference of 77 days into perspective, for fully deployed organizations, a breach occurring on 1 January would on average take until 4 September to identify and contain. In contrast, for organizations with no automation deployed, a breach on 1 January would take on average until 20 November to identify and contain.
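As a quick sanity check on these figures, the sketch below adds the identification and containment averages and projects the containment date for a breach occurring on 1 January. It is a minimal illustration, not anything from the report itself, and it assumes a non-leap year (2021 is used here) and counts the breach date as day one.

```python
from datetime import date, timedelta

# Average days to identify and contain a breach, per the 2021 report
fully_automated = {"identify": 184, "contain": 63}  # total lifecycle: 247 days
no_automation   = {"identify": 239, "contain": 85}  # total lifecycle: 324 days

breach_start = date(2021, 1, 1)  # illustrative breach date (1 January)

for label, days in [("Fully deployed automation", fully_automated),
                    ("No automation", no_automation)]:
    lifecycle = days["identify"] + days["contain"]
    # The breach date itself counts as day one, so add (lifecycle - 1) days
    contained_on = breach_start + timedelta(days=lifecycle - 1)
    print(f"{label}: {lifecycle} days, contained by {contained_on.day} {contained_on:%B}")
```

Run as written, this prints a 247-day lifecycle ending 4 September for fully automated organizations and a 324-day lifecycle ending 20 November for those with no automation, consistent with the 77-day difference described above.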

The time it takes to identify and contain a breach has consistently been correlated with the overall cost of a breach. The longer threat actors are in an environment, the more opportunities they have to cause damage to systems and the broader the infection can become. Simply put, when it comes to breaches, time is money. Automation helps significantly reduce this time to find and repair any issues earlier and reduce costs associated with a data breach.

Automation and AI Benefits Extend Beyond Breaches

Automation and AI also act as force multipliers for an organization, increasing the effectiveness of the existing workforce while taking on mundane tasks. Besides saving costs, this gives security teams more time and resources to focus on data breaches.

Additionally, some industries, such as healthcare and finance, have extra regulatory requirements around their data. Automation and AI can parse streams of data from different sources to help maintain regulatory compliance. The healthcare and finance industries in particular have invested more resources in automation and AI for cybersecurity in recent years.

Incorporating security automation and AI can be challenging and complex if an organization has no in-house experts in the field. Fortunately, IBM Security offers external expertise to help fill this need.

Take Time to Discover More

This blog is the third in a series covering security measures organization leaders can take to address data breaches, following zero trust and risk quantification. The next installment addresses a final element to consider, cloud security. For an overview of all these issues, read the report to learn more about what options exist to address a data breach.
