Artificial intelligence (AI) in cybersecurity was a popular topic at RSA’s virtual conference this year, and with good reason. Many tools rely on AI, using it for incident response, spam and phishing detection, and threat hunting. However, while AI security gets the session titles, dig deeper and it is clear that machine learning (ML) is really what makes it work. The reason is simple: ML allows for “high-value predictions that can guide better decisions and smart actions in real-time without humans stepping in.”

Yet, for all ML can do to improve intelligence and help AI security do more, it has its flaws. ML, and by extension AI, is only as smart as people teach it to be. If the AI isn’t trained on the right data and algorithms, it could end up weakening your defenses. Also, threat actors have the same access to AI and ML tools as defenders do. We are already seeing attackers use ML to launch attacks, as well as how ML itself can serve as an attack vector. Take a look at the benefits and dangers the experts discussed at RSA.

What Machine Learning Cybersecurity Gets Right

When provided the right data set, ML is good at seeing the big picture of the digital landscape you’re trying to defend. That’s according to Jess Garcia, technical lead with One eSecurity, who presented the RSA session ‘Me, My Adversary & AI: Investigating and Hunting with Machine Learning.’

ML is most useful for security purposes in prediction, noise filtering and anomaly detection. “A malicious event tends to be an anomaly,” Garcia says. Defenders can use ML designed to detect anomalies for threat detection and threat hunting.
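The anomaly-detection idea Garcia describes can be sketched in a few lines. The sketch below uses made-up hourly failed-login counts and a simple z-score test, a stand-in for the statistical patterns a trained ML model would learn from real telemetry.

```python
# Minimal anomaly-detection sketch (hypothetical data): flag values
# that deviate strongly from the mean of the observed series.
from statistics import mean, stdev

def find_anomalies(counts, threshold=2.0):
    """Return indices whose z-score magnitude exceeds the threshold."""
    mu = mean(counts)
    sigma = stdev(counts)
    return [i for i, c in enumerate(counts)
            if sigma and abs(c - mu) / sigma > threshold]

# Hourly failed-login counts; the spike at index 5 is the malicious event.
logins = [12, 15, 11, 14, 13, 480, 12, 16]
print(find_anomalies(logins))  # → [5]
```

A production system would learn a richer baseline per user or host, but the principle is the same: malicious events stand out as statistical outliers.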

The size of the dataset matters when programming ML for AI security. As Younghoo Lee, senior data scientist with Sophos, pointed out in the session ‘AI vs AI: Creating Novel Spam and Catching it with Text Generating AI,’ more training data yields better results, and pre-trained language models matter for downstream tasks. Lee’s session focused on spam creation and protection, but the advice applies across ML systems used for cybersecurity.
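To make Lee’s point concrete, here is a toy word-frequency spam scorer on a hypothetical handful of labelled messages. Its probability estimates are built from word counts, so every additional labelled example sharpens them; the tiny dataset and all message text are invented for illustration.

```python
# Toy spam scorer (hypothetical data): Laplace-smoothed ratio of how
# often a message's words appear in spam vs. ham training examples.
from collections import Counter

def train(messages):
    """Count word occurrences per label ('spam' / 'ham')."""
    counts = {"spam": Counter(), "ham": Counter()}
    for text, label in messages:
        counts[label].update(text.lower().split())
    return counts

def spam_score(counts, text):
    """Score > 1 leans spam, < 1 leans ham."""
    spam_total = sum(counts["spam"].values()) + 1
    ham_total = sum(counts["ham"].values()) + 1
    score = 1.0
    for word in text.lower().split():
        p_spam = (counts["spam"][word] + 1) / spam_total
        p_ham = (counts["ham"][word] + 1) / ham_total
        score *= p_spam / p_ham
    return score

data = [("win a free prize now", "spam"),
        ("meeting moved to friday", "ham"),
        ("claim your free reward", "spam"),
        ("lunch on friday works", "ham")]
model = train(data)
print(spam_score(model, "free prize inside"))  # > 1, leans spam
print(spam_score(model, "see you friday"))     # < 1, leans ham
```

With only four training messages the scores sit close to the decision boundary; feeding in thousands of labelled messages is what makes the word probabilities, and hence the verdicts, reliable.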

When Attackers Use ML or AI Security

In the session ‘Evasion, Poisoning, Extraction, and Inference: The Tools to Defend and Evaluate,’ presenters Beat Buesser, research staff member with IBM Research, and Abigail Goldsteen, research staff member with IBM, shared four different adversarial threats against ML. Attackers can use:

  • Evasion: Modify an input to influence a model
  • Poisoning: Add a backdoor to training data
  • Extraction: Steal a proprietary model
  • Inference: Learn about private data

“We’re seeing an increasing number of these real-world threats,” says Buesser. Threat actors use techniques that distort what the ML knows, some of which have life-or-death consequences. One example is attackers placing stickers on a highway to force a self-driving vehicle to swerve into oncoming traffic. Another shows how attackers can modify vulnerable ML systems to bypass security filters and let more phishing emails get through.
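The phishing example above is a case of evasion, the first threat in the list. A minimal sketch of the idea, using a made-up linear filter and feature values rather than any real model, nudges an input against the model’s weights until its score crosses the decision threshold:

```python
# Evasion sketch (hypothetical model and features): perturb an input
# along the negative gradient of a linear spam filter's score until
# the filter no longer blocks it.

def score(weights, features):
    """Linear model: a non-negative score means 'block as spam'."""
    return sum(w * x for w, x in zip(weights, features))

def evade(weights, features, step=0.1):
    """Greedily shift each feature opposite its weight's sign until
    the input is misclassified (gradient-based evasion)."""
    x = list(features)
    while score(weights, x) >= 0:
        x = [xi - step * w for xi, w in zip(x, weights)]
    return x

weights = [0.8, -0.2, 0.5]         # learned filter weights (invented)
email = [1.0, 0.3, 0.9]            # features of a phishing email (invented)
print(score(weights, email) >= 0)  # True: the filter blocks it
adv = evade(weights, email)
print(score(weights, adv) >= 0)    # False: the perturbed email slips through
```

Real evasion attacks target far more complex models, but the mechanism is the same: small, directed changes to the input that flip the model’s decision while keeping the malicious payload intact.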

Balancing the Pros and Cons

ML systems designed to augment AI security have become a boon to security teams. More automation means less burnout, along with more accurate threat detection and remediation. However, because threat actors see ML as an attack vector, teams should also know where ML and AI exist within the company or agency beyond their own systems. Once familiar with those ML and AI functions, they can learn where potential problems linger and how those could become springboards for an attack.

ML and AI security have the potential to change detection and prevention models for the better. But you still need the human touch to ensure ML isn’t causing security problems instead of solving them.
