I’ve often wondered whether artificial intelligence (AI) in cybersecurity is a good thing or a bad thing for data security. I love the convenience of online stores suggesting the perfect items based on my search history, but it also feels a bit creepy when a pair of shoes I looked at stalks me around the internet. As a consumer, I’ve realized the answer is a little bit of both.

I’ve recently been researching and writing about the rise of AI-based attacks. In brief, AI rests on quickly analyzing large amounts of data and then using that data to make predictions. I learned that both the good guys and the bad guys can use AI. What does that mean for the future of data security?

Does AI Increase Data Security Threats?

Judging by how threat actors use AI, the short answer is yes, AI increases threats. When cyber criminals rely on human intelligence alone, they mostly find vulnerabilities manually, even when they use tools. With AI, they can automate the analysis of a target environment and find its weaknesses more quickly. Essentially, AI makes attacks smarter and more accurate. Because AI depends on data to become smarter, threat actors can use data collected from previous attempts to predict vulnerabilities or spot changes in a victim’s data security.

Three common attacks are evasion attacks, poisoning attacks and privacy attacks. In evasion attacks, malicious content avoids detection by changing its code at test time. Poisoning attacks corrupt the data sets a model is trained on, while privacy attacks extract sensitive data from a model. Each of these attacks uses AI to spot openings and then strike faster than the victim can detect it.
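To make the poisoning idea concrete, here is a minimal sketch in Python using scikit-learn. The synthetic dataset and the 30% label-flip rate are illustrative assumptions, not a description of any real attack; the point is simply how tampering with training labels degrades a defender's detection model:

```python
# Minimal sketch of a label-flipping poisoning attack; the dataset,
# model choice and flip rate are illustrative assumptions.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic "security telemetry": benign (0) vs. malicious (1) samples.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clean_model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Poisoning: an attacker who can tamper with training data flips the
# labels of 30% of the training set, degrading the defender's detector.
rng = np.random.default_rng(0)
poisoned = y_train.copy()
idx = rng.choice(len(poisoned), size=int(0.3 * len(poisoned)), replace=False)
poisoned[idx] = 1 - poisoned[idx]

poisoned_model = LogisticRegression(max_iter=1000).fit(X_train, poisoned)

print("clean accuracy:   ", clean_model.score(X_test, y_test))
print("poisoned accuracy:", poisoned_model.score(X_test, y_test))
```

Running this, the poisoned model scores noticeably worse on the same test data, which is exactly the outcome a poisoning attacker wants.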

One of the most challenging aspects of AI-driven attacks is this speed. Threat actors can spot new trends faster than defenders can develop tools and strategies to counter them. Threat actors can also use AI to analyze large amounts of data and then design social engineering attacks likely to work on specific people.

How AI Can Improve Vulnerability Management

It’s strange how many businesses and agencies don’t devote adequate resources to the vulnerability management side of data security. Effective cybersecurity starts with stopping criminals from gaining access in the first place. That’s not an easy task to do right, though. It can be expensive and complex, and it requires coordination across many different people and roles.

That’s exactly why vulnerability management is such a good fit for AI. Instead of relying on manual work, AI automates the data analysis and system reviews. The result is a much faster process, making it more likely that defenders spot an opening before a cyber criminal does and fix the issue before an attack happens.

AI can also improve data security in several ways. Let’s start with the cornerstone of vulnerability management: knowing your level of cyber risk. A vulnerability risk score, produced through vulnerability assessment, is the key to knowing where you stand in defending your data and infrastructure. Because AI can analyze large amounts of data more quickly and accurately, you get an almost real-time picture of your vulnerability.
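There is no single standard formula for a vulnerability risk score, but the idea can be sketched simply. The following Python example blends severity, asset value and exploitability into one number; the field names and weights are invented for illustration only:

```python
# Illustrative sketch of a weighted vulnerability risk score; the
# fields and weights are assumptions, not a standard scoring formula.
from dataclasses import dataclass

@dataclass
class Finding:
    cve_id: str
    cvss_base: float          # 0.0-10.0 severity from the CVSS base score
    asset_criticality: float  # 0.0-1.0, how important the affected asset is
    exploit_available: bool   # public exploit code exists

def risk_score(f: Finding) -> float:
    """Blend severity, asset value and exploitability into one 0-100 score."""
    score = (f.cvss_base / 10.0) * 60          # severity carries most weight
    score += f.asset_criticality * 25          # crown-jewel assets rank higher
    score += 15 if f.exploit_available else 0  # known exploits jump the queue
    return round(score, 1)

findings = [
    Finding("CVE-2024-0001", cvss_base=9.8, asset_criticality=0.9,
            exploit_available=True),
    Finding("CVE-2024-0002", cvss_base=7.5, asset_criticality=0.2,
            exploit_available=False),
]

# Remediation order: highest risk first.
for f in sorted(findings, key=risk_score, reverse=True):
    print(f.cve_id, risk_score(f))
```

In practice, an AI-driven tool would recompute scores like this continuously across thousands of findings, which is what makes the near real-time risk picture possible.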

One of the big challenges with vulnerability management is keeping track of all of the different information sources; potential attacks may be discussed on chat boards and in private channels. (This plagues cybersecurity in general.) By using AI to spot trends across those sources, defenders know where to focus their often-limited resources for the biggest impact. Along the same lines, organizations should use AI to help triage security alerts and identify which ones are both relevant and important, as in the sketch below.
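Here is a minimal sketch of what ML-assisted alert triage could look like, assuming a history of alerts already labeled as real incidents or false alarms; the feature names and sample values are invented for illustration:

```python
# Minimal sketch of ML-assisted alert triage; features, values and the
# availability of labeled history are illustrative assumptions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Each row: [severity 0-10, asset_criticality 0-1, hits_threat_intel 0/1]
history = np.array([
    [9.0, 0.9, 1],
    [3.0, 0.1, 0],
    [7.5, 0.8, 1],
    [2.0, 0.3, 0],
    [8.0, 0.5, 0],
    [4.0, 0.2, 0],
])
was_real_incident = np.array([1, 0, 1, 0, 1, 0])

model = RandomForestClassifier(random_state=0).fit(history, was_real_incident)

# Score today's queue and surface the alerts most likely to matter.
queue = np.array([[6.0, 0.7, 1], [2.5, 0.1, 0], [9.5, 0.9, 0]])
priorities = model.predict_proba(queue)[:, 1]
for alert, p in sorted(zip(queue.tolist(), priorities), key=lambda t: -t[1]):
    print(f"priority={p:.2f} alert={alert}")
```

The design point is modest: the model doesn't replace analysts, it orders the queue so that limited human attention lands on the alerts most likely to be real.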

Moving Data Security Forward With AI

AI is transforming vulnerability management from both sides — the attacker and the attacked. With threat actors increasingly employing AI for attacks, the best way that organizations can stay ahead of them is by using the same tech. Otherwise, they’re opening their data security efforts up to threats that have the speed and accuracy of AI on their side.
