Many in the cybersecurity workforce can’t keep up with technological change and are too busy to learn about the latest threats. Some are so burned out that they are leaving the industry entirely. These are among the findings of a June 2019 study by Goldsmiths, University of London, and Symantec, and they should worry not only those who work in cybersecurity, but everyone who relies on a computer to do their work.

If there was ever a time to talk about integrating cybersecurity AI into your enterprise, it’s now.

Why the Security Burnout?

Last year, the Ponemon Institute released a report titled “Separating the Truths From the Myths in Cybersecurity.” One of the most telling lines from the report was that organizations are “suffering from investments in disjointed, non-integrated security products that increase cost and complexity.” It is telling because the people who ultimately have to work with those disjointed, non-integrated security products are your security staff.

Imagine that you are on the front lines of cybersecurity now: Not only do you have to deal with a mountain of alerts and responses every day, but you also have to untangle these security products. Frustration and burnout are inevitable. It’s no wonder cybersecurity jobs are hard to fill and retaining those workers is equally difficult. And it’s no wonder Cybersecurity Ventures predicted that there will be 3.5 million unfilled cybersecurity jobs by 2021.

We are dealing with three inevitable situations:

  1. Malicious cyber activity continues to rise, whether it is from nation-states or criminal actors;
  2. Reliance on technology is increasing, not decreasing; and
  3. New technologies, such as 5G, mean more data is going to be produced and retained.

What does all that mean? It means that now, more than ever, we could use an assist. And there is a fourth inevitability worth mentioning: the malicious use of artificial intelligence (AI). There’s simply no way around it: Just as you shouldn’t take a handful of tissue paper to a Nerf ball fight, you shouldn’t fight AI-equipped adversaries without some AI of your own. Your stressed-out employees may be your best business case for implementing cybersecurity AI.

Who, or What, Helps Share the Burden?

Now, there are those who are reluctant to implement AI, and they have legitimate questions: How are certain training methods being used? What data has the AI been trained on? What data will the AI analyze? And who ultimately has control, the machine or the person?

These are all good questions, and they need to be asked before an enterprise integrates AI into its security posture. Why? First, asking them helps you understand your business processes and forces you into a risk-based decision-making frame of mind. Second, and perhaps more importantly here, it helps you avoid adding more of those disjointed, non-integrated products mentioned above; otherwise, you just end up building more fragility into your system.

In fact, your desired state should be antifragility: You grow stronger from the very attempts to break you. Enter cybersecurity AI.

Artificial Intelligence Is a Tool, Not a Crutch

With the cyberthreat landscape being what it is, the case for piling on help is simple. If you use cybersecurity AI like a surgical tool, you lighten the burden on your staff. AI can chew through mountains of data at speeds no human team can match. Therefore, at least in theory, the result of cybersecurity AI doing the heavy lifting should be:

  • Staff becoming more productive, since they no longer feel overwhelmed; and
  • Staff having increased ability to keep up with new threats and technologies.
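As a minimal sketch of the kind of heavy lifting meant here, the toy function below (hypothetical names, standard-library Python only) flags alert sources whose volume deviates sharply from the fleet average, so analysts review a handful of outliers instead of every alert. A real deployment would use a trained model rather than a simple z-score; this only illustrates the triage idea.

```python
from collections import Counter
from statistics import mean, stdev

def triage(alerts, z_threshold=1.0):
    """Flag alert sources whose volume is an outlier versus the fleet.

    `alerts` is a list of dicts with a "source" key; returns a dict
    mapping each outlier source to its alert count. Purely illustrative:
    a production system would learn what "normal" looks like instead
    of relying on a fixed z-score cutoff.
    """
    counts = Counter(a["source"] for a in alerts)
    if len(counts) < 2:
        return dict(counts)  # not enough sources to compare
    mu, sigma = mean(counts.values()), stdev(counts.values())
    return {src: n for src, n in counts.items()
            if sigma > 0 and (n - mu) / sigma > z_threshold}
```

In this sketch, a source emitting ten times the typical volume surfaces immediately, while routine noise stays out of the analyst’s queue.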

But the key is ensuring that your AI solution is in fact used as a tool and not as a crutch, something we can all be guilty of when integrating technology into our professional and personal lives. To be most effective, cybersecurity AI needs to team up with the cybersecurity workforce, not replace it.

Your Cybersecurity AI Will Only Be as Good as the Data It’s Trained On

This section header largely speaks for itself, but it also reinforces the need for the “human touch” as we integrate more AI into our security practices. AI will be fantastic for fast incident response, risk identification, prioritization, automation and scalability, but it is the people who hold cybersecurity jobs today who need to make sure the AI doesn’t go off course.

To be clear, this isn’t because the AI will have some flawed algorithm; that is an entirely different problem. Rather, the cybersecurity AI needs a guide to make sure it’s doing the right thing, because here’s the real kicker: Unless we decide to hand full control to the machines (insert your favorite post-apocalyptic machine-run world movie here), we will be making decisions based on what the AI recommends.

This is the danger zone, because if we’re not careful, the AI shifts from being a tool to a crutch. And the more we lean on a crutch that may be cracked, the harder the fall will be when it snaps.
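One hedge against the crutch problem is an explicit human-in-the-loop gate. The sketch below (a hypothetical policy with made-up names and thresholds) auto-applies an AI recommendation only when model confidence is high and the action is reversible; everything else escalates to an analyst:

```python
def route_recommendation(action, confidence, reversible, auto_threshold=0.95):
    """Decide whether an AI recommendation runs automatically.

    Illustrative policy only: auto-apply requires both high model
    confidence and a reversible action (e.g., blocking an IP can be
    undone; wiping a host cannot). Everything else goes to a human.
    """
    if confidence >= auto_threshold and reversible:
        return "auto-apply"
    return "escalate-to-analyst"
```

The threshold and the reversibility rule are assumptions for illustration; the point is that the machine recommends, and a person, or a policy a person wrote, decides.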

It’s a Team Game

Just as organizational leadership and security leadership need to team up to determine the best security solution for the organization, analysts need to team up with AI to determine how to manage all the alerts and responses. When you’re bogged down and facing tight deadlines, such as privacy notifications or getting a system back online, managing all that data will feel like a mountain on your shoulders. That alone may be the best business case for getting yourself a surgical AI tool.
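Teaming up can start with something as simple as letting the machine collapse duplicates before a person ever looks. This toy sketch (hypothetical field names) groups raw alerts into incidents so an analyst reviews one item instead of N repeats:

```python
from collections import defaultdict

def correlate(alerts):
    """Group raw alerts into incidents keyed by (source, rule).

    `alerts` is a list of dicts with "source" and "rule" keys; returns
    a dict mapping each (source, rule) pair to its list of alerts, so
    the analyst queue holds incidents rather than individual alerts.
    """
    incidents = defaultdict(list)
    for alert in alerts:
        incidents[(alert["source"], alert["rule"])].append(alert)
    return dict(incidents)
```

A real SOAR pipeline would correlate on richer features and time windows; the grouping key here is just an assumption to keep the example small.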

With a lot of next-gen technology around the corner, this may be the time to do a wholesale upgrade of your operations. Done correctly, the intended results should be:

  • A better understanding of your business processes;
  • Decisions made from a risk-based approach;
  • Happier staff, ready and able to be more productive; and
  • A next-gen solution that integrates AI and washes away those disjointed, non-integrated products.

Your staff will no doubt appreciate the upgrade in offensive and defensive capacity. In fact, they may appreciate it so much that they’ll see no reason to look for other cybersecurity jobs. With 3.5 million unfilled jobs on the horizon, holding on to these highly qualified people may be critical for your enterprise.
