I’ve often wondered whether artificial intelligence (AI) is a good thing or a bad thing for data security. Yes, I love the convenience of online stores suggesting the perfect items for me based on my search history, but it also feels a bit creepy to have a pair of shoes I looked at stalking me around the internet. As a consumer, I’ve realized the answer is a little bit of both.

I’ve recently been researching and writing about the rise of AI-based attacks. In brief, the premise of AI is simple: quickly analyze large amounts of data, then use that data to make predictions. I learned that both the good guys and the bad guys can use AI. What does that mean for the future of data security?

Does AI Increase Data Security Threats?

Judging by how threat actors use AI, the short answer is yes, AI increases threats. When cyber criminals rely on human intelligence alone, they mostly find vulnerabilities manually, even if they use tools. With AI, they can automate the analysis of a target environment and find its weaknesses far more quickly. Essentially, AI makes attacks smarter and more accurate. And because AI depends on data to become smarter, threat actors can use data collected from previous attempts to predict vulnerabilities or spot changes in victims’ data security.

Three common attacks are evasion attacks, poisoning attacks and privacy attacks. In an evasion attack, malicious content avoids detection by changing its code at test time. Poisoning attacks focus on corrupting training data sets, while privacy attacks retrieve sensitive data. In each case, AI is used to spot an opening and then carry out the attack faster than the victim can detect it.
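To make the poisoning idea concrete, here is a deliberately tiny Python sketch: a toy nearest-centroid "detector" is trained on synthetic traffic samples, and an attacker injects mislabeled points into the training set so that a clearly malicious sample slips past it. The data, the labels and the classifier are all illustrative assumptions; real poisoning attacks target far more complex models.

```python
# Toy illustration of a data-poisoning attack: injecting mislabeled
# training samples degrades a simple nearest-centroid classifier.
# All data and labels here are synthetic and purely illustrative.

def centroid(points):
    """Mean of a list of equal-length feature vectors."""
    n = len(points)
    return [sum(p[i] for p in points) / n for i in range(len(points[0]))]

def train(samples):
    """samples: list of (features, label). Returns per-class centroids."""
    by_label = {}
    for features, label in samples:
        by_label.setdefault(label, []).append(features)
    return {label: centroid(pts) for label, pts in by_label.items()}

def predict(model, features):
    """Classify by nearest centroid (squared Euclidean distance)."""
    return min(model, key=lambda lbl: sum((a - b) ** 2
                                          for a, b in zip(model[lbl], features)))

# Clean training data: "benign" traffic clusters near the origin,
# "malicious" traffic clusters near (10, 10).
clean = [([0.0, 1.0], "benign"), ([1.0, 0.0], "benign"),
         ([10.0, 9.0], "malicious"), ([9.0, 10.0], "malicious")]

# Poisoned copy: the attacker slips in malicious-looking points labeled
# "benign", dragging the benign centroid toward the malicious cluster.
poisoned = clean + [([10.0, 10.0], "benign")] * 4
```

With the clean model, a sample at `[8.0, 8.0]` is flagged as malicious; after training on the poisoned set, the shifted benign centroid sits close enough that the same sample is waved through as benign.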

One of the most challenging parts of AI attacks is this speed. Threat actors can spot new trends more quickly than defenders can develop tools and strategies to defend against the attacks. Threat actors can also use AI to analyze large amounts of data and then design social engineering attacks likely to work on specific people.

How AI Can Improve Vulnerability Management

It’s surprising how many businesses and agencies don’t devote adequate resources to the vulnerability management side of data security. Effective cybersecurity starts with stopping criminals from gaining access in the first place. That’s not an easy task to do right, though: it can be expensive and complex, and it requires coordination across many different people and roles.

This is exactly why vulnerability management is such a good fit for AI. Instead of relying on manual tasks, AI automates data analysis and systems reviews. The result is a much quicker process, which makes it more likely that defenders spot an opening before a cyber criminal does and can fix the issue before an attack happens.

AI can also improve data security in several other ways. Let’s start with the cornerstone of vulnerability management: knowing your level of cyber risk. A vulnerability risk score assessment is the key to knowing where you stand in defending your data and infrastructure. Because AI can analyze large amounts of data more quickly and accurately, you get an almost real-time picture of your exposure.
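As a rough illustration of what such a risk score might look like, here is a minimal Python sketch that combines a CVSS-style base severity with asset criticality and internet exposure, then ranks findings. The field names and weighting scheme are assumptions made for illustration, not any standard formula.

```python
# Hypothetical vulnerability risk score: combine a CVSS-style base
# severity (0-10) with asset criticality and exposure, then rank.
# The weights and fields below are illustrative, not an industry standard.

def risk_score(cvss_base, asset_criticality, internet_exposed):
    """Return a 0-100 risk score for a single finding.

    cvss_base: float in [0, 10], severity of the flaw itself
    asset_criticality: float in [0, 1], how important the asset is
    internet_exposed: bool, reachable from outside the network
    """
    exposure = 1.0 if internet_exposed else 0.5
    return round(cvss_base * 10 * asset_criticality * exposure, 1)

findings = [
    {"id": "VULN-1", "cvss": 9.8, "criticality": 1.0, "exposed": True},
    {"id": "VULN-2", "cvss": 9.8, "criticality": 0.3, "exposed": False},
    {"id": "VULN-3", "cvss": 5.0, "criticality": 1.0, "exposed": True},
]

# Rank highest-risk first so remediation effort goes where it matters:
# a medium-severity flaw on a critical, exposed asset can outrank a
# critical flaw buried on a low-value internal host.
ranked = sorted(findings,
                key=lambda f: risk_score(f["cvss"], f["criticality"], f["exposed"]),
                reverse=True)
```

In this toy data set, the moderate VULN-3 on a critical internet-facing asset outranks the critical VULN-2 sitting on a low-value internal host.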

One of the big challenges with vulnerability management is keeping track of all the different information sources. Potential attacks may be discussed on chat boards and private channels. (This plagues cybersecurity in general.) By using AI to spot trends, defenders can see where to focus their often-limited resources for the biggest impact. Along the same lines, organizations should look to use AI to help triage security alerts, so they know which ones are both relevant and important.
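A simple way to picture this kind of triage is a scoring function that ranks alerts by severity, boosts those matching currently trending attack techniques, and collapses duplicates. The Python sketch below is a hand-rolled heuristic, not any real product’s logic; the severity levels and technique names are made up for the example.

```python
# Hypothetical alert-triage sketch: score alerts by severity, boost ones
# matching trending attack techniques, and suppress duplicate alerts.
# Severity levels, technique names and the boost value are illustrative.

from collections import Counter

SEVERITY = {"low": 1, "medium": 2, "high": 3, "critical": 4}

def triage(alerts, trending_techniques):
    """Return alerts sorted most-urgent first, with duplicates collapsed."""
    counts = Counter((a["host"], a["technique"]) for a in alerts)
    seen, unique = set(), []
    for a in alerts:
        key = (a["host"], a["technique"])
        if key in seen:
            continue  # collapse repeats of the same alert
        seen.add(key)
        score = SEVERITY[a["severity"]]
        if a["technique"] in trending_techniques:
            score += 2  # boost: matches an active trend defenders care about
        unique.append({**a, "score": score, "occurrences": counts[key]})
    return sorted(unique, key=lambda a: a["score"], reverse=True)

alerts = [
    {"host": "web-1", "technique": "credential-stuffing", "severity": "medium"},
    {"host": "db-1", "technique": "sql-injection", "severity": "high"},
    {"host": "web-1", "technique": "credential-stuffing", "severity": "medium"},
]

queue = triage(alerts, trending_techniques={"credential-stuffing"})
```

Here the medium-severity credential-stuffing alert jumps the queue because it matches a trending technique, while its duplicate is folded into an occurrence count instead of cluttering the analyst’s view.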

Moving Data Security Forward With AI

AI is transforming vulnerability management from both sides — the attacker and the attacked. With threat actors increasingly employing AI for attacks, the best way that organizations can stay ahead of them is by using the same tech. Otherwise, they’re opening their data security efforts up to threats that have the speed and accuracy of AI on their side.
