I’ve often wondered whether artificial intelligence (AI) is a good thing or a bad thing for data security. Yes, I love the convenience of online stores suggesting the perfect items for me based on my search history, but it also feels a bit creepy when a pair of shoes I looked at once follows me around the internet. As a consumer, I’ve realized the answer is a little bit of both.

I’ve recently been researching and writing on the rise of AI-based attacks. In brief, the premise of AI is quickly analyzing large amounts of data and then using that data to make predictions. I learned that both the good guys and the bad guys can use AI. What does that mean for the future of data security?

Does AI Increase Data Security Threats?

Judging by how threat actors use AI, the short answer is yes, AI increases threats. When cyber criminals rely on human intelligence alone, they mostly find vulnerabilities manually, even if they are using tools. With AI, they can automate the analysis of a target environment and find its weaknesses much faster. Essentially, AI makes attacks smarter and more accurate. Because AI depends on data to become smarter, threat actors can use data collected from previous attempts to predict vulnerabilities or spot changes in victims’ data security.

Three common attacks are evasion attacks, poisoning attacks and privacy attacks. In an evasion attack, malicious content dodges detection by altering its code or inputs at test time. Poisoning attacks corrupt the data sets a model is trained on, while privacy attacks extract sensitive data from it. Each is launched by using AI to spot an opening and then carry out the attack faster than the victim can detect it.
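To make the poisoning idea concrete, here is a minimal sketch, assuming scikit-learn and an arbitrary synthetic dataset, of how flipping a fraction of training labels quietly degrades a model’s accuracy. It is an illustration of the concept, not a real attack technique.

```python
# Toy illustration of label-flip poisoning; dataset and model choices are arbitrary.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Clean baseline
clean = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("clean accuracy:   ", clean.score(X_test, y_test))

# Poisoned copy: flip the labels of 30% of the training rows
rng = np.random.default_rng(0)
idx = rng.choice(len(y_train), size=int(0.3 * len(y_train)), replace=False)
y_poisoned = y_train.copy()
y_poisoned[idx] = 1 - y_poisoned[idx]

poisoned = LogisticRegression(max_iter=1000).fit(X_train, y_poisoned)
print("poisoned accuracy:", poisoned.score(X_test, y_test))
```

The poisoned model still trains without errors, which is the point: the damage shows up only as quietly worse decisions downstream.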

One of the most challenging parts of AI-driven attacks is this speed. Threat actors can spot new trends faster than defenders can build the tools and strategies to counter them. Threat actors can also use AI to analyze large amounts of data and then design social engineering attacks tailored to specific people.

How AI Can Improve Vulnerability Management

It’s surprising how many businesses and agencies don’t devote adequate resources to the vulnerability management side of data security. Effective cybersecurity starts with stopping criminals from gaining access in the first place. That isn’t easy to do right, though: it can be expensive and complex, and it requires teamwork across many different people and roles.

That complexity is exactly why vulnerability management is such a good fit for AI. Instead of relying on manual effort, AI automates the data analysis and systems reviews, making the process much faster and more likely to spot an opening before a cyber criminal does. The good guys can then fix the issue before an attack happens.

AI can also improve data security in several other ways. Let’s start with the cornerstone of vulnerability management: knowing your level of cyber risk. A vulnerability risk score, derived from a vulnerability assessment, is the key to knowing where you stand in defending your data and infrastructure. Because AI can analyze large amounts of data more quickly and accurately, you get an almost real-time picture of your exposure.
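As an illustration only, here is a small sketch of how such a score can blend raw severity with business context. The fields, weights and 0-100 scale are assumptions for this example, not any particular vendor’s scoring formula.

```python
# Hypothetical risk-scoring sketch: weights and fields are illustrative only.
from dataclasses import dataclass

@dataclass
class Finding:
    cvss_base: float          # 0-10 severity from the scanner
    asset_criticality: float  # 0-1, how important the affected asset is
    exploit_observed: bool    # exploitation seen in the wild?
    internet_facing: bool     # reachable from outside the network?

def risk_score(f: Finding) -> float:
    """Blend severity with business context into a 0-100 score."""
    score = (f.cvss_base / 10) * 60           # severity carries most weight
    score += f.asset_criticality * 20         # important assets rank higher
    score += 15 if f.exploit_observed else 0  # active exploitation is urgent
    score += 5 if f.internet_facing else 0    # external exposure adds a little more
    return round(score, 1)

findings = [
    Finding(9.8, 0.9, True, True),    # critical bug on a key public server
    Finding(7.5, 0.3, False, False),  # serious bug on a low-value internal box
]
for f in sorted(findings, key=risk_score, reverse=True):
    print(risk_score(f), f)
```

The value AI adds is keeping the inputs to a score like this current across thousands of assets, so the ranking reflects today’s environment rather than last quarter’s scan.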

One of the big challenges with vulnerability management is keeping track of all of the different information sources. Potential attacks may be discussed on chat boards and private forums. (This plagues cybersecurity in general.) By using AI to spot trends across those sources, defenders know where to focus their often-limited resources for the biggest impact. Along the same lines, organizations should use AI to help triage security alerts and determine which ones are both relevant and important, as sketched below.
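Here is a minimal triage sketch. The alert fields, training data and model choice are all assumptions, but the pattern of learning from past analyst verdicts and ranking new alerts by predicted relevance is the core idea.

```python
# Minimal alert-triage sketch: rank incoming alerts by a model trained on
# past analyst verdicts. Fields, data and model are illustrative assumptions.
from sklearn.feature_extraction import DictVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Historical alerts an analyst already judged (1 = worth escalating)
history = [
    ({"source": "edr", "severity": 9, "asset": "server"}, 1),
    ({"source": "ids", "severity": 3, "asset": "laptop"}, 0),
    ({"source": "edr", "severity": 7, "asset": "server"}, 1),
    ({"source": "av",  "severity": 2, "asset": "laptop"}, 0),
]
X, y = zip(*history)

model = make_pipeline(DictVectorizer(sparse=False), LogisticRegression())
model.fit(list(X), list(y))

# New alerts, ranked so analysts see the likeliest true positives first
incoming = [
    {"source": "ids", "severity": 8, "asset": "server"},
    {"source": "av",  "severity": 1, "asset": "laptop"},
]
ranked = sorted(incoming, key=lambda a: model.predict_proba([a])[0][1], reverse=True)
for alert in ranked:
    print(round(model.predict_proba([alert])[0][1], 2), alert)
```

A real deployment would use far richer features and retraining as analysts keep labeling, but even this toy version shows how the queue can be reordered so limited attention goes to the alerts most likely to matter.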

Moving Data Security Forward With AI

AI is transforming vulnerability management from both sides: the attacker and the attacked. With threat actors increasingly employing AI in their attacks, the best way for organizations to stay ahead is to use the same technology. Otherwise, they’re leaving their data security efforts exposed to threats that have the speed and accuracy of AI on their side.
