Security leaders need to cut through the hype when it comes to artificial intelligence (AI) security. While AI offers promise, buzzwords and big-picture thinking aren’t enough to deliver practical, useful results. Instead, using AI security correctly starts with knowing what it looks like today and what AI will look like tomorrow.

Improved curation, enhanced context and the growing field of stateful solutions are three trends that can help you better understand the AI of the future.

The State of AI Cybersecurity Today

The AI security market has undergone major growth, surpassing $8.6 billion in 2019. More recently, Forbes reports that 76% of enterprises now “prioritize AI and machine learning (ML) over other IT initiatives in 2021.”

While current AI deployments focus largely on key tasks, such as incident reporting and analysis, the Institute of Electrical and Electronics Engineers notes that ongoing improvement of AI security techniques can increase threat detection rates, reduce false positives and improve behavioral analysis. But what does this look like in practice?

Curation: Distilling the Digital Impact of AI Security

First, take a look at curation: Intelligent tools can sort through millions of research papers, blogs, news stories and network events and then deliver relevant and real-time threat intelligence that helps people make data-driven decisions and improve front-line defensive posture.

In effect, curation acts to reduce the scaled-up problem of alert fatigue for IT teams. That problem now includes much more than simply perimeter security detection and application issue notification. By empowering AI to consume and then curate multiple sources, it’s possible for infosec experts to get a bird’s-eye view of what’s happening across security landscapes — and what steps are needed to improve overall protection.
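The curation idea above can be sketched in a few lines. This is a hypothetical, minimal example (the feed names and indicators are invented): it deduplicates raw alerts from multiple sources and surfaces the indicators corroborated by the most independent feeds first, which is one simple way to cut alert fatigue.

```python
def curate_alerts(alerts, top_n=3):
    """Deduplicate raw alerts from multiple feeds and rank indicators
    by how many independent sources reported them.

    `alerts` is a list of (source, indicator) tuples.
    """
    sources_per_indicator = {}
    for source, indicator in alerts:
        sources_per_indicator.setdefault(indicator, set()).add(source)
    # Indicators seen across more sources are surfaced first.
    ranked = sorted(sources_per_indicator.items(),
                    key=lambda kv: len(kv[1]), reverse=True)
    return [indicator for indicator, _ in ranked[:top_n]]

# Illustrative feed data, not real threat intelligence:
feeds = [
    ("blog", "CVE-2021-0001"),
    ("news", "CVE-2021-0001"),
    ("paper", "CVE-2021-0001"),
    ("news", "phishing-domain.example"),
    ("blog", "phishing-domain.example"),
    ("siem", "port-scan-10.0.0.5"),
]
print(curate_alerts(feeds))  # most-corroborated indicator listed first
```

Real curation systems add natural-language processing over the source text; the ranking-by-corroboration step, however, looks much like this.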

AI and Cybersecurity: Context and Beyond

Context comes next. This speaks to the algorithmic infrastructure needed to go beyond the ‘what’ offered by curation tools and help people understand why specific events are occurring.

For enterprises, a contextual approach offers two key benefits: improved root cause response and reduced access complexity. Consider an attack on front-facing organizational apps. While existing tools can detect forbidden actions and close application sessions on their own, machine learning cybersecurity analysis makes it possible to pinpoint the nature and type of specific risks.

When it comes to user permissions, meanwhile, AI security tools can leverage context cues to approve or deny access. For example, if access requests are made from a new user location at an odd time of day, AI tools can deny entry and flag these events for further review. On the flip side, keeping tabs on users with familiar and repeating access patterns makes it possible for AI tools to approve specific sign-on requests without the need for more verification.
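The access logic just described can be expressed as a simple risk score. This is a hedged sketch, not a production policy: the signals (location novelty, odd hour) come straight from the paragraph above, but the thresholds and the `access_decision` function itself are illustrative assumptions.

```python
def access_decision(request, known_locations, typical_hours):
    """Score a sign-on request against the user's historical context.

    Returns "allow", "review", or "deny". Weights and thresholds
    here are illustrative only.
    """
    risk = 0
    if request["location"] not in known_locations:
        risk += 2  # request from a new location for this user
    if request["hour"] not in typical_hours:
        risk += 1  # request at an odd time of day
    if risk >= 3:
        return "deny"    # new place AND odd hour: block and flag for review
    if risk > 0:
        return "review"  # partial anomaly: require more verification
    return "allow"       # familiar pattern: no extra friction

# Familiar pattern sails through; an anomalous one is denied:
print(access_decision({"location": "Boston", "hour": 10},
                      {"Boston"}, set(range(8, 19))))   # allow
print(access_decision({"location": "Oslo", "hour": 3},
                      {"Boston"}, set(range(8, 19))))   # deny
```

A learned model would replace the hand-set weights, but the shape of the decision (context in, graded response out) is the same.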

In addition to ease of access, this AI security approach also has knock-on revenue effects.

“The establishment of low-friction end user experiences has the potential to help boost security effectiveness while reducing management efforts and related costs,” says Steve Brasen, Research Director, Enterprise Management Associates.

Stateless Versus Stateful Applications

No matter how advanced AI becomes, humans remain a critical part of the cybersecurity loop. On the infosec side, humans will always be required for oversight and interpretation. Meanwhile, on the end-user side they introduce the risk of randomness. What people do and why they do it isn’t always obvious.

As a result, enterprises are often best served by a mix of stateless and stateful applications. Stateless applications have no stored knowledge and therefore no reference frame for past transactions. Stateful apps, meanwhile, leverage the context of previous actions to help assess user requests.

While stateless solutions offer a way to gate one-time transactions, such as 2FA access requests, stateful ones make it possible to understand the impact of people. They aggregate historical and contextual datasets to form a framework that helps better model and manage incidents driven by people.
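The stateless/stateful distinction can be made concrete with a toy sketch. Both gates below are hypothetical illustrations: the stateless check judges each request on its own, while the stateful gate keeps a history of prior events and uses it as the reference frame the article describes.

```python
def stateless_gate(token, valid_tokens):
    """Stateless: the decision depends only on the request itself.
    No memory of past transactions is kept."""
    return token in valid_tokens

class StatefulGate:
    """Stateful: remembers past requests to judge new ones."""

    def __init__(self):
        self.history = []  # prior (user, action) events

    def assess(self, user, action):
        # Count how often this user has performed this action before.
        seen = sum(1 for u, a in self.history if (u, a) == (user, action))
        self.history.append((user, action))
        return "familiar" if seen > 0 else "novel"

gate = StatefulGate()
print(stateless_gate("otp-123", {"otp-123"}))  # True: one-time check
print(gate.assess("alice", "login"))           # novel: first sighting
print(gate.assess("alice", "login"))           # familiar: history informs it
```

The first call answers a one-time question, like a 2FA code check; the second pair shows how accumulated context changes the answer to the same request.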

The Human Side of AI Security

So what’s next for AI security? Recent survey data suggests worry among IT leaders: fully 41% believe intelligent tools will replace their roles by 2030. But that outcome isn’t likely.

Here, trust is the tipping point. Experts agree that building trust across AI and security is critical for widespread adoption. Consider the ongoing challenge of bias, which occurs when systems include unconscious preference for specific actions or outcomes. This bias could lead to the under-representation or over-weighting of specific events, in turn exposing enterprises to risk.

The solution is twofold: better data to train AI on and expert human oversight.

“Machines get biased because the training and data they’re fed may not be fully representative of what you’re trying to teach them,” says IBM Chief Science Officer for Cognitive Computing Guru Banavar.

For malicious actors, their goal may be to use biased AI to exploit key system or network resources.

Solving this issue, therefore, demands human oversight. Just as human action (or inaction) can cause problems, expert oversight of AI-driven results can help ensure that tools are targeting the right incidents at the right time for the right reasons. Tools capable of curation, context and stateful analysis enhance this ability, helping to give human-led infosec teams the edge over threat actors.

Bottom line? The future of AI security depends on curation tempered by critical context, informed by stateful analysis and watched over by human experts.
