Security leaders need to cut through the hype when it comes to artificial intelligence (AI) security. While AI offers promise, buzzwords and big-picture thinking aren’t enough to deliver practical, useful results. Instead, using AI security correctly starts with knowing what it looks like today and what AI will look like tomorrow.

Improved curation, enhanced context and the growing field of stateful solutions are three trends that can help you better understand the AI of the future.

The State of AI Cybersecurity Today

The AI security market has undergone major growth, surpassing $8.6 billion in 2019. More recently, Forbes reports that 76% of enterprises now “prioritize AI and machine learning (ML) over other IT initiatives in 2021.”

While current AI deployments focus largely on key tasks, such as incident reporting and analysis, the Institute of Electrical and Electronics Engineers notes that ongoing improvement of AI security techniques can increase threat detection rates, reduce false positives and improve behavioral analysis. But what does this look like in practice?

Curation: Distilling the Digital Impact of AI Security

First, take a look at curation: Intelligent tools can sort through millions of research papers, blogs, news stories and network events and then deliver relevant and real-time threat intelligence that helps people make data-driven decisions and improve front-line defensive posture.

In effect, curation acts to reduce the scaled-up problem of alert fatigue for IT teams. That problem now includes much more than simply perimeter security detection and application issue notification. By empowering AI to consume and then curate multiple sources, it’s possible for infosec experts to get a bird’s-eye view of what’s happening across security landscapes — and what steps are needed to improve overall protection.
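As a rough illustration (not any particular vendor's implementation), the Python sketch below scores hypothetical feed items against a watchlist of assets an organization cares about and surfaces only the most relevant ones. Production curation relies on natural language processing and learned relevance models rather than simple tag overlap, but the filtering principle is the same.

```python
# Hypothetical feed items; in practice these would come from threat-intel
# feeds, research abstracts, news stories and SIEM events.
FEED_ITEMS = [
    {"source": "blog", "text": "New ransomware strain targets VPN appliances", "tags": {"ransomware", "vpn"}},
    {"source": "siem", "text": "Repeated failed logins on VPN gateway", "tags": {"vpn", "auth"}},
    {"source": "news", "text": "Vendor releases patch for VPN flaw", "tags": {"vpn", "patch"}},
    {"source": "blog", "text": "Opinion piece on security budgets", "tags": {"budget"}},
]

# Assets and topics this (hypothetical) organization actually cares about.
WATCHLIST = {"vpn", "ransomware", "auth"}

def curate(items, watchlist, top_n=3):
    """Score each item by overlap with the watchlist and return the most relevant."""
    scored = []
    for item in items:
        relevance = len(item["tags"] & watchlist)
        if relevance:                      # drop items with no overlap at all
            scored.append((relevance, item))
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [item for _, item in scored[:top_n]]

for item in curate(FEED_ITEMS, WATCHLIST):
    print(f"[{item['source']}] {item['text']}")
```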

AI and Cybersecurity: Context and Beyond

Context comes next. This speaks to the algorithmic infrastructure needed to go beyond the ‘what’ offered by curation tools and help people understand why specific events are occurring.

For enterprises, a contextual approach offers two key benefits: improved root cause response and reduced access complexity. Consider an attack on an organization’s public-facing apps. While existing tools can detect forbidden actions and close application sessions on their own, machine learning cybersecurity analysis makes it possible to pinpoint the nature and type of specific risks.

When it comes to user permissions, meanwhile, AI security tools can leverage context cues to approve or deny access. For example, if access requests are made from a new user location at an odd time of day, AI tools can deny entry and flag these events for further review. On the flip side, keeping tabs on users with familiar and repeating access patterns makes it possible for AI tools to approve specific sign-on requests without the need for more verification.
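A minimal sketch of that logic, with entirely hypothetical signals and thresholds, might look like the following. Real systems weigh many more contextual cues and typically use a trained risk model rather than hand-set scores.

```python
from datetime import datetime

def assess_login(user_history, request):
    """Toy contextual access check; user_history and request are hypothetical structures."""
    risk = 0
    if request["location"] not in user_history["known_locations"]:
        risk += 2                                   # never seen this location before
    hour = request["timestamp"].hour
    if hour < 6 or hour > 22:                       # outside the user's usual working hours
        risk += 1
    if request["device_id"] not in user_history["known_devices"]:
        risk += 2

    if risk >= 3:
        return "deny_and_flag"     # block entry and queue the event for analyst review
    if risk == 0:
        return "approve"           # familiar pattern: no extra verification needed
    return "step_up_auth"          # borderline: ask for a second factor

history = {"known_locations": {"Boston"}, "known_devices": {"laptop-123"}}
request = {"location": "Kyiv", "device_id": "phone-999",
           "timestamp": datetime(2021, 5, 4, 3, 15)}
print(assess_login(history, request))   # -> deny_and_flag
```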

In addition to ease of access, this AI security approach also delivers knock-on cost benefits.

“The establishment of low-friction end user experiences has the potential to help boost security effectiveness while reducing management efforts and related costs,” says Steve Brasen, Research Director, Enterprise Management Associates.

Stateless Versus Stateful Applications

No matter how advanced AI becomes, humans remain a critical part of the cybersecurity loop. On the infosec side, humans will always be required for oversight and interpretation. On the end-user side, meanwhile, humans introduce the risk of randomness: what people do and why they do it isn’t always obvious.

As a result, enterprises are often best served by a mix of stateless and stateful applications. Stateless applications have no stored knowledge and therefore no reference frame for past transactions. Stateful apps, meanwhile, leverage the context of previous actions to help assess user requests.

While stateless solutions offer a way to gate one-time transactions, such as 2FA access requests, stateful ones make it possible to understand the impact of people. They aggregate historical and contextual datasets to form a framework that helps better model and manage incidents driven by people.
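The toy comparison below illustrates the difference, using made-up request data: the stateless check judges each one-time code on its own, while the stateful verifier keeps a per-user history and flags requests that break the established pattern.

```python
# Stateless check: every request is judged on its own, with no memory.
def stateless_check(otp_submitted, otp_expected):
    return otp_submitted == otp_expected

# Stateful check: the decision also depends on what this user did before.
class StatefulVerifier:
    def __init__(self):
        self.history = {}   # user -> list of past (location, hour) observations

    def check(self, user, location, hour):
        past = self.history.setdefault(user, [])
        familiar = any(loc == location and abs(h - hour) <= 2 for loc, h in past)
        past.append((location, hour))
        # First-ever request or a familiar pattern passes; anything novel is flagged.
        return "allow" if familiar or len(past) == 1 else "review"

verifier = StatefulVerifier()
print(stateless_check("123456", "123456"))      # True: valid for this one transaction
print(verifier.check("alice", "Boston", 9))     # allow (first observation)
print(verifier.check("alice", "Boston", 10))    # allow (matches history)
print(verifier.check("alice", "Kyiv", 3))       # review (breaks the pattern)
```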

The Human Side of AI Security

So what’s next for AI security? Recent survey data suggests worry among IT leaders that intelligent tools will replace their roles: fully 41% believe they’ll be replaced by 2030. But that outcome isn’t likely.

Here, trust is the tipping point. Experts agree that building trust across AI and security is critical for widespread adoption. Consider the ongoing challenge of bias, which occurs when systems include unconscious preference for specific actions or outcomes. This bias could lead to the under-representation or over-weighting of specific events, in turn exposing enterprises to risk.

The solution is twofold: better data to train AI on and expert human oversight.

“Machines get biased because the training and data they’re fed may not be fully representative of what you’re trying to teach them,” says IBM Chief Science Officer for Cognitive Computing Guru Banavar.
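One simple illustration of the data side of that fix is to measure how event classes are distributed in a training set before a model ever learns from it. The sketch below uses made-up labels and an arbitrary 5% threshold; a genuine bias review goes much further, but even this surfaces obvious gaps for human reviewers.

```python
from collections import Counter

# Hypothetical labelled training events for a detection model.
training_labels = (
    ["benign"] * 9_500 +
    ["phishing"] * 400 +
    ["lateral_movement"] * 100
)

counts = Counter(training_labels)
total = sum(counts.values())
for label, count in counts.items():
    share = count / total
    flag = "  <-- under-represented" if share < 0.05 else ""
    print(f"{label:20s} {count:6d} ({share:.1%}){flag}")
```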

Malicious actors, for their part, may seek to exploit biased AI to compromise key system or network resources.

Solving this issue, therefore, demands human oversight. Just as human action (or inaction) can cause problems, expert oversight of AI-driven results can help ensure that tools are targeting the right incidents at the right time for the right reasons. Tools capable of curation, context and stateful analysis enhance this ability, helping to give human-led infosec teams the edge over threat actors.

Bottom line? The future of AI security depends on curation tempered by critical context, informed by stateful analysis and watched over by human experts.
