Artificial intelligence (AI) in cybersecurity was a popular topic at RSA’s virtual conference this year, and with good reason. Many tools rely on AI for incident response, spam and phishing detection and threat hunting. However, while AI security gets the session titles, dig a little deeper and it becomes clear that machine learning (ML) is what really makes it work. The reason is simple: ML allows for “high-value predictions that can guide better decisions and smart actions in real-time without humans stepping in.”

Yet, for all ML can do to improve intelligence and help AI security do more, it has its flaws. ML, and by extension AI, is only as smart as the people who teach it. If a model is trained on the wrong data or with the wrong algorithms, it could end up making your defenses weaker. In addition, threat actors have the same access to AI and ML tools as defenders do. We are starting to see how attackers use ML to launch attacks, as well as how ML systems themselves can serve as an attack vector. Take a look at the benefits and dangers the experts discussed at RSA.

What Machine Learning Cybersecurity Gets Right

When provided the right data set, ML is good at seeing the big picture of the digital landscape you’re trying to defend. That’s according to Jess Garcia, technical lead with One eSecurity, who presented the RSA session ‘Me, My Adversary & AI: Investigating and Hunting with Machine Learning.’

Among the areas where ML is most useful for security are prediction, noise filtering and anomaly detection. “A malicious event tends to be an anomaly,” Garcia says. Defenders can use anomaly detection models for both threat detection and threat hunting, as in the sketch below.
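
To make that concrete, here is a minimal sketch of anomaly detection using scikit-learn’s IsolationForest. The login-event features, thresholds and contamination rate are all hypothetical choices for illustration, not anything from Garcia’s session:

  # Minimal anomaly-detection sketch (illustrative; not Garcia's tooling).
  # Assumes login events summarized as numeric features per event.
  import numpy as np
  from sklearn.ensemble import IsolationForest

  rng = np.random.default_rng(42)

  # Hypothetical features: [hour_of_day, failed_logins, megabytes_transferred]
  normal_events = np.column_stack([
      rng.normal(13, 3, 500),   # logins cluster around business hours
      rng.poisson(1, 500),      # occasional failed attempts
      rng.normal(20, 5, 500),   # typical transfer sizes
  ])

  # Fit on (mostly) benign history; contamination is a tunable guess.
  model = IsolationForest(contamination=0.01, random_state=0)
  model.fit(normal_events)

  # Score a new event: -1 = anomaly, 1 = normal.
  suspicious = np.array([[3.0, 15.0, 400.0]])  # 3 a.m., many failures, big transfer
  print(model.predict(suspicious))             # likely [-1]: flag for threat hunting

In practice, threat hunters would engineer far richer features from logs and telemetry, but the pattern is the same: train on mostly benign history, then flag the events the model scores as outliers.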

The size of the dataset matters when training ML for AI security. As Younghoo Lee, Senior Data Scientist with Sophos, pointed out in the session ‘AI vs AI: Creating Novel Spam and Catching it with Text Generating AI,’ more training data gives better results, and pre-trained language models improve performance on downstream tasks. Lee’s session focused on generating spam and defending against it, but the advice applies across ML systems used for cybersecurity.
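
As a toy illustration of the data-volume point, here is a bare-bones spam classifier. It uses a simple TF-IDF pipeline rather than the pre-trained language models Lee discussed, purely to keep the sketch self-contained, and the corpus is invented:

  # Toy spam classifier sketch; Lee's session used pre-trained language
  # models, but the data-volume point holds even for this simple pipeline.
  from sklearn.feature_extraction.text import TfidfVectorizer
  from sklearn.linear_model import LogisticRegression
  from sklearn.pipeline import make_pipeline

  # Hypothetical labeled corpus; real systems train on millions of messages.
  texts = [
      "Verify your account now to avoid suspension",
      "Meeting moved to 3 p.m., see updated invite",
      "You have won a prize, click here to claim",
      "Quarterly report attached for your review",
  ]
  labels = [1, 0, 1, 0]  # 1 = spam/phish, 0 = legitimate

  model = make_pipeline(TfidfVectorizer(), LogisticRegression())
  model.fit(texts, labels)

  print(model.predict(["Claim your prize before your account is suspended"]))

With only four training messages, the decision boundary is fragile; the same pipeline trained on a large, diverse corpus generalizes far better, which is exactly why more training data gives better results.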

When Attackers Target ML and AI Security

In the session ‘Evasion, Poisoning, Extraction, and Inference: The Tools to Defend and Evaluate,’ presenters Beat Buesser, research staff member with IBM Research, and Abigail Goldsteen, research staff member with IBM, shared four different adversarial threats against ML. Attackers can use:

  • Evasion: Modify an input to influence a model
  • Poisoning: Add a backdoor to training data
  • Extraction: Steal a proprietary model
  • Inference: Learn about private data

“We’re seeing an increasing number of these real-world threats,” says Buesser. Threat actors use techniques that distort what an ML model knows, some of which have life-or-death fallout. One example is attackers placing stickers on a highway to force a self-driving vehicle to swerve into oncoming traffic. Another shows how attackers can craft inputs that evade vulnerable ML filtering systems, letting more phishing emails get through.
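
To see how evasion works mechanically, here is a self-contained sketch of a gradient-based evasion attack, in the spirit of the fast gradient sign method, against a hypothetical linear spam scorer. It is illustrative only and is not the presenters’ tooling:

  # Minimal evasion sketch (fast-gradient-sign style) against a linear
  # classifier; illustrative only, not the presenters' tooling.
  import numpy as np

  rng = np.random.default_rng(0)

  # Pretend weights of an already-trained linear spam scorer.
  w = rng.normal(size=5)
  b = 0.1

  def sigmoid(z):
      return 1.0 / (1.0 + np.exp(-z))

  x = rng.normal(size=5)                 # a message's feature vector
  print("before:", sigmoid(w @ x + b))   # probability of "spam"

  # Evasion: nudge features against the gradient of the spam score.
  # For a linear model the input gradient is proportional to w, so a
  # small step of size eps in the direction -sign(w) lowers the score.
  eps = 0.5
  x_adv = x - eps * np.sign(w)
  print("after: ", sigmoid(w @ x_adv + b))  # lower spam probability

Note that the attacker never touches the model itself; small, targeted nudges to the input are enough to flip its verdict, which is part of what makes evasion hard to spot.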

Balancing the Pros and Cons

ML systems designed to augment AI security have become a boon to security teams. More automation means less burnout and more accurate threat detection and response. However, because threat actors see ML as an attack vector, teams should also know where ML and AI exist within the company or agency beyond their own security stack. Once familiar with those ML and AI functions, they can learn where potential problems may linger and how those could become springboards for an attack.

ML and AI security have the potential to change detection and prevention models for the better. But you still need the human touch to ensure ML isn’t causing security problems instead of solving them.
