Security leaders need to cut through the hype when it comes to artificial intelligence (AI) security. While AI offers promise, buzzwords and big-picture thinking aren’t enough to deliver practical, useful results. Instead, putting AI security to work starts with understanding what it looks like today and where it is headed tomorrow.

Improved curation, enhanced context and the growing field of stateful solutions are three trends that can help you better understand the AI of the future.

The State of AI Cybersecurity Today

The AI security market has undergone major growth, surpassing $8.6 billion in 2019. More recently, Forbes reports that 76% of enterprises now “prioritize AI and machine learning (ML) over other IT initiatives in 2021.”

While current AI deployments focus largely on key tasks, such as incident reporting and analysis, the Institute of Electrical and Electronics Engineers notes that ongoing improvement of AI security techniques can increase threat detection rates, reduce false positives and improve behavioral analysis. But what does this look like in practice?

Curation: Distilling the Digital Impact of AI Security

First, take a look at curation: Intelligent tools can sort through millions of research papers, blogs, news stories and network events, then deliver relevant, real-time threat intelligence that helps teams make data-driven decisions and improve front-line defensive posture.

In effect, curation reduces alert fatigue for IT teams, a problem that now extends well beyond perimeter security detection and application issue notifications. By empowering AI to consume and then curate multiple sources, infosec experts can get a bird’s-eye view of what’s happening across security landscapes and what steps are needed to improve overall protection.
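
To make this concrete, here’s a minimal sketch of how a curation pipeline might rank items pulled from multiple feeds. The field names, keyword weights and 30-day decay window are all assumptions for illustration; a production system would learn relevance from analyst feedback rather than hard-code it.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

@dataclass
class IntelItem:
    source: str          # e.g. "research-feed", "vendor-blog", "network-events"
    title: str
    published: datetime  # timezone-aware timestamp
    keywords: set[str]

# Illustrative weights; a real curation engine would learn these from analyst feedback.
PRIORITY_TERMS = {"ransomware": 3.0, "zero-day": 3.0, "cve": 2.0, "phishing": 1.5}

def score(item: IntelItem, now: datetime) -> float:
    """Combine keyword relevance with recency so fresh, high-impact items surface first."""
    relevance = sum(PRIORITY_TERMS.get(k, 0.0) for k in item.keywords)
    age_days = (now - item.published) / timedelta(days=1)
    recency = max(0.0, 1.0 - age_days / 30.0)  # linear decay over 30 days
    return relevance * (0.5 + recency)

def curate(items: list[IntelItem], top_n: int = 5) -> list[IntelItem]:
    """Return the top_n most relevant items across all ingested sources."""
    now = datetime.now(timezone.utc)
    return sorted(items, key=lambda i: score(i, now), reverse=True)[:top_n]
```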

AI and Cybersecurity: Context and Beyond

Context comes next. This speaks to the algorithmic infrastructure needed to go beyond the ‘what’ offered by curation tools and help people understand the ‘why’ behind specific events.

For enterprises, a contextual approach offers two key benefits: improved root cause response and reduced access complexity. Consider an attack on public-facing organizational apps. While existing tools can detect forbidden actions and close application sessions on their own, machine learning-driven analysis makes it possible to pinpoint the nature and type of specific risks.
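
As a rough illustration, and assuming hypothetical event signals, that kind of analysis might map a blocked application event to a root-cause hypothesis along these lines:

```python
# Illustrative mapping from observed signals to risk categories; a real
# deployment would infer these labels with a trained classifier.
RISK_SIGNATURES = {
    ("login_burst", "many_source_ips"): "credential stuffing",
    ("query_anomaly", "quote_characters"): "SQL injection attempt",
    ("session_replay", "stale_token"): "session hijacking",
}

def classify_risk(signals: tuple[str, str]) -> str:
    """Return a root-cause hypothesis for a blocked application event."""
    return RISK_SIGNATURES.get(signals, "unclassified: escalate to an analyst")

print(classify_risk(("login_burst", "many_source_ips")))  # -> credential stuffing
```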

When it comes to user permissions, meanwhile, AI security tools can leverage context cues to approve or deny access. For example, if access requests are made from a new user location at an odd time of day, AI tools can deny entry and flag these events for further review. On the flip side, keeping tabs on users with familiar and repeating access patterns makes it possible for AI tools to approve specific sign-on requests without the need for more verification.
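
A simplified sketch of that decision logic might look like the following. The thresholds, field names and decision labels are illustrative assumptions; a real deployment would score risk with a trained model rather than fixed rules.

```python
def access_decision(user_history: dict, location: str, hour: int) -> str:
    """Approve, deny or escalate a sign-on request based on contextual cues.

    user_history is assumed to hold the user's known locations and typical
    active hours, e.g. {"locations": {"Austin"}, "hours": range(7, 19)}.
    """
    known_location = location in user_history["locations"]
    usual_hour = hour in user_history["hours"]

    if known_location and usual_hour:
        return "approve"        # familiar pattern: no extra verification needed
    if not known_location and not usual_hour:
        return "deny-and-flag"  # new location at an odd hour: block and review
    return "step-up-auth"       # mixed signals: require additional verification

# A request from a new city at 3 a.m. is denied and flagged for review.
history = {"locations": {"Austin"}, "hours": range(7, 19)}
print(access_decision(history, location="Lagos", hour=3))  # -> deny-and-flag
```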

In addition to easing access, this AI security approach also carries bottom-line benefits.

“The establishment of low-friction end user experiences has the potential to help boost security effectiveness while reducing management efforts and related costs,” says Steve Brasen, research director at Enterprise Management Associates.

Stateless Versus Stateful Applications

No matter how advanced AI becomes, humans remain a critical part of the cybersecurity loop. On the infosec side, humans will always be required for oversight and interpretation. On the end-user side, meanwhile, they introduce unpredictability: what people do and why they do it isn’t always obvious.

As a result, enterprises are often best served by a mix of stateless and stateful applications. Stateless applications have no stored knowledge and therefore no reference frame for past transactions. Stateful apps, meanwhile, leverage the context of previous actions to help assess user requests.

While stateless solutions offer a way to gate one-time transactions, such as 2FA access requests, stateful solutions make it possible to account for human behavior. They aggregate historical and contextual data into a framework that helps better model and manage people-driven incidents.
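
One way to picture the difference is the sketch below, which uses illustrative class and field names rather than any particular product’s API. The stateless checker judges each request on its own, while the stateful checker builds a per-user baseline first.

```python
class StatelessChecker:
    """Judges each request in isolation, with no memory of past transactions."""

    def allow(self, request: dict) -> bool:
        # A one-time gate, e.g. validating a 2FA code supplied with the request.
        return bool(request.get("valid_2fa_code"))

class StatefulChecker:
    """Keeps per-user history so decisions can account for past behavior."""

    def __init__(self) -> None:
        self.history: dict[str, list[str]] = {}

    def allow(self, user: str, action: str) -> bool:
        past = self.history.setdefault(user, [])
        # Illustrative rule: once a behavioral baseline exists, flag any
        # action this user has never performed before.
        suspicious = len(past) >= 10 and action not in past
        past.append(action)
        return not suspicious
```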

The Human Side of AI Security

So what’s next for AI security? Recent survey data points to worry among IT leaders that intelligent tools will replace their roles: fully 41% believe AI will take over their jobs by 2030. But that outcome isn’t likely.

Here, trust is the tipping point. Experts agree that building trust across AI and security is critical for widespread adoption. Consider the ongoing challenge of bias, which occurs when systems develop an unintended preference for specific actions or outcomes. This bias can lead to the under-representation or over-weighting of specific events, in turn exposing enterprises to risk.

The solution is twofold: better data to train AI on and expert human oversight.

“Machines get biased because the training and data they’re fed may not be fully representative of what you’re trying to teach them,” says IBM Chief Science Officer for Cognitive Computing Guru Banavar.

Malicious actors, meanwhile, may aim to exploit biased AI to compromise key system or network resources.

Solving this issue, therefore, demands human oversight. Just as human action (or inaction) can cause problems, expert oversight of AI-driven results can help ensure that tools are targeting the right incidents at the right time for the right reasons. Tools capable of curation, context and stateful analysis enhance this ability, helping to give human-led infosec teams the edge over threat actors.

Bottom line? The future of AI security depends on curation tempered by critical context, informed by stateful analysis and watched over by human experts.
