Security teams face a constant stream of warnings about failed login attempts, possible phishing emails and potential malware threats, among other challenges. Add to that concerns over authorized use (who has permission to do what, when and why) and issues around the private data generated by staff and consumers. According to Neustar, around 26 percent of these alerts are false positives. Some require no action, others are an easy fix and only a small percentage actually require IT intervention.

The result is hardly surprising: alert fatigue. Overworked, understaffed teams do their best to manage the infosec environment, but they’re often hampered by the sheer volume of alerts, reports and warnings generated by disparate security systems across the organization.

Augment Your Security Teams With Cybersecurity AI

Implementing cybersecurity AI offers an alternative. First hype, then hope, this security strategy is now gaining ground as a viable way to shift the burden of alert management away from security pros and onto the digital shoulders of artificially intelligent tools.

But what happens after cybersecurity AI tools have made their big debut and staff can get out from under the avalanche of infosec alerts? Here’s a look at six things security professionals can do once cybersecurity AI takes on some of the heavy lifting.

1. Deep Clean the Network

No network is invincible, and recent research reported by TechRadar found that virtually all mobile apps are susceptible to malware. With cloud services expanding and application environments growing, the first thing IT pros should do once AI lightens the load is deep clean the network.

First up? Sweep for cloud sprawl. Locate instances or applications that aren’t in use or may pose a potential security risk and shut them down. Work with staff to identify apps they’re using that may not be approved by IT, and then look for ways to secure them or provide approved alternatives.
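To make that concrete, here’s a minimal sketch of what a sprawl sweep might look like on AWS, assuming the boto3 SDK and a hypothetical "owner" tagging convention. It only flags candidates for human review; it doesn’t shut anything down.

```python
# Hypothetical sketch: flag EC2 instances that look like cloud sprawl
# (stopped, or missing an ownership tag). Assumes AWS credentials are
# already configured; the region and tag name are illustrative.
import boto3

REQUIRED_TAG = "owner"  # assumed tagging convention, adjust to your own

def find_sprawl_candidates(region: str = "us-east-1"):
    ec2 = boto3.client("ec2", region_name=region)
    candidates = []
    paginator = ec2.get_paginator("describe_instances")
    for page in paginator.paginate():
        for reservation in page["Reservations"]:
            for instance in reservation["Instances"]:
                tags = {t["Key"]: t["Value"] for t in instance.get("Tags", [])}
                stopped = instance["State"]["Name"] == "stopped"
                untagged = REQUIRED_TAG not in tags
                if stopped or untagged:
                    candidates.append({
                        "InstanceId": instance["InstanceId"],
                        "State": instance["State"]["Name"],
                        "MissingOwnerTag": untagged,
                    })
    return candidates

if __name__ == "__main__":
    for candidate in find_sprawl_candidates():
        print(candidate)  # review each candidate before shutting anything down
```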

Next, look for common weaknesses and unpatched vulnerabilities. It’s far better to find potential problems proactively than to discover them after the fact.
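As one small example, a quick reachability check for commonly exposed services can surface hosts that need attention. The host list and ports below are placeholders, and checks like this should only ever run against systems you are authorized to test.

```python
# Hypothetical sketch: check hosts you own for commonly exposed services
# using a simple TCP connect test. Hosts and ports are examples only.
import socket

COMMON_PORTS = {21: "ftp", 22: "ssh", 23: "telnet", 445: "smb", 3389: "rdp"}

def exposed_services(host: str, timeout: float = 0.5):
    findings = []
    for port, service in COMMON_PORTS.items():
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
            sock.settimeout(timeout)
            if sock.connect_ex((host, port)) == 0:  # 0 means the port accepted a connection
                findings.append((port, service))
    return findings

if __name__ == "__main__":
    for host in ["10.0.0.5", "10.0.0.6"]:  # placeholder internal hosts
        print(host, exposed_services(host))
```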

Last but not least, prioritize penetration testing for your entire network. Find a reputable, reliable and robust third party and bring them on board to analyze and evaluate your IT environment. Let’s be honest: Deep cleaning the network isn’t a task anyone enjoys, but it’s the first step toward a better security strategy once AI is up and running.

2. Deploy New Staff

It may seem counterintuitive to search for new staff after AI ramps up infosec efficacy. Isn’t part of the point of machine learning and neural networks to solve the staffing crisis?

Not quite. As TechCrunch points out, the relative infancy of AI initiatives means increased staff oversight is required to ensure these tools avoid alert bias and perform optimally. Even as AI tools become more embedded across infosec systems, teams need to recruit new experts or upskill existing staff to meet the growing management needs of potentially error-prone AI.

3. Dig Into Analysis

With AI shouldering the burden of everyday alerts, as well as responding automatically to minor infosec issues and identifying false positives, there’s finally room for IT staff to dig deeper into analysis.

Forbes has noted, for example, that AI tools are now being used to analyze application risks and predictively model their long-term threat potential. Using this data as a jumping-off point, staff can focus on identifying trends at scale and being proactive in combating security shortfalls.
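As a rough illustration, a short script like the following could surface alert categories whose volume is trending upward. It assumes a CSV export of AI-triaged alerts with hypothetical "timestamp" and "category" columns, and a simple first-to-last-week growth comparison.

```python
# Hypothetical sketch: find alert categories trending upward over time,
# using a CSV export of AI-triaged alerts. Column names are assumptions.
import pandas as pd

def rising_alert_categories(path: str = "alerts.csv", min_growth: float = 0.5):
    df = pd.read_csv(path, parse_dates=["timestamp"])
    weekly = (
        df.set_index("timestamp")
          .groupby("category")
          .resample("W")          # weekly alert counts per category
          .size()
          .rename("count")
          .reset_index()
    )
    rising = []
    for category, group in weekly.groupby("category"):
        counts = group["count"].tolist()
        if len(counts) >= 2 and counts[0] > 0:
            growth = (counts[-1] - counts[0]) / counts[0]
            if growth >= min_growth:
                rising.append((category, growth))
    return sorted(rising, key=lambda item: item[1], reverse=True)

if __name__ == "__main__":
    for category, growth in rising_alert_categories():
        print(f"{category}: +{growth:.0%} since the first week of data")
```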

4. Break Down Security Silos

Silos remain a serious problem for infosec teams. While the uptake of cloud services has democratized IT access, it has also increased overall complexity.

Consider the rapid adoption of multicloud environments: Recent survey data from Flexera shows that organizations now rely on an average of around five different clouds to manage IT. That means five different approaches to data management, information security and asset mobility across the enterprise. Where these approaches create friction at the edges of department silos, security risk increases.

The deployment of cybersecurity AI makes it possible for infosec pros to focus on streamlining cloud operations by identifying and deploying multicloud management tools that improve visibility, security and automation at scale.

5. Draft Solid Policy

With AI handling the basics, it’s time to take a step back and draft infosec policy that meets current demands and helps companies future-proof protective processes. While AI tools can help detect, identify and mitigate attacks, the onus remains on IT pros to enact and enforce cybersecurity policies.

As a result, it’s critical for security teams to work with the C-suite to develop policy that permits critical functions without increasing overall risk. Policy priorities should cover network use, access requirements, mobile deployments, application installation, and data storage and transfer. They should also spell out specific remedies in case of a policy breach. In other words, IT staff must communicate with employees about what’s permitted, what’s expected and what consequences may result if policy isn’t followed.
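One way to make such a policy enforceable is to express parts of it as code. The sketch below encodes two of the priorities above (access requirements and application installation) as simple data-driven checks; the resources, roles and application names are purely illustrative assumptions.

```python
# Hypothetical sketch: a tiny "policy as code" check. All policy contents
# and request fields here are illustrative, not a real organization's rules.
ACCESS_POLICY = {
    "finance-db": {"allowed_roles": {"finance", "audit"}, "mfa_required": True},
    "hr-portal": {"allowed_roles": {"hr"}, "mfa_required": True},
}
APPROVED_APPS = {"slack", "zoom", "office365"}

def check_access(resource: str, role: str, mfa_passed: bool) -> tuple[bool, str]:
    rule = ACCESS_POLICY.get(resource)
    if rule is None:
        return False, f"No policy defined for {resource}; deny by default"
    if role not in rule["allowed_roles"]:
        return False, f"Role '{role}' is not permitted on {resource}"
    if rule["mfa_required"] and not mfa_passed:
        return False, f"MFA is required for {resource}"
    return True, "Access permitted"

def check_app_install(app_name: str) -> tuple[bool, str]:
    if app_name.lower() in APPROVED_APPS:
        return True, "Approved application"
    return False, "Unapproved application; request review from IT"

if __name__ == "__main__":
    print(check_access("finance-db", "finance", mfa_passed=True))
    print(check_app_install("randomtool"))
```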

There is also an increasing need to develop solid policy around the use and deployment of AI itself. As noted by Raffael Marty, VP of research and intelligence at Forcepoint, the use of AI algorithms remains an area of rapid evolution and potential risk. Infosec pros must develop policy that accounts for both the efficacy of emerging algorithms and the potential errors that may stem from them.

6. Develop C-Suite Confidence

Despite the increasing capability of AI tools, trust remains low. In fact, 60 percent of security professionals trust human-verified infosec findings over those of AI, according to survey results reported by Security Magazine. Respondents indicated that human intuition, creativity and experience outweigh the predictive processes of AI. That gives security staff a new role to play in the emerging infosec environment: both fact-checkers and champions of AI solutions.

By creating data workflows that empower human analysis and oversight, it’s possible for security teams to bolster C-suite confidence and pave the way for ongoing AI adoption.
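A minimal sketch of such a workflow might look like the following, with AI confidence scores and thresholds as assumed placeholders: anything the model isn’t confident about lands in a human review queue instead of being auto-resolved.

```python
# Hypothetical sketch: a human-in-the-loop triage workflow. The AI-assigned
# threat scores and the thresholds are illustrative assumptions; uncertain
# alerts go to human review rather than being auto-resolved.
from dataclasses import dataclass, field

@dataclass
class Alert:
    alert_id: str
    description: str
    ai_threat_score: float  # 0.0 = benign, 1.0 = confirmed threat (assumed scale)

@dataclass
class TriageQueues:
    auto_closed: list = field(default_factory=list)
    escalated: list = field(default_factory=list)
    human_review: list = field(default_factory=list)

def triage(alerts, close_below: float = 0.1, escalate_above: float = 0.9) -> TriageQueues:
    queues = TriageQueues()
    for alert in alerts:
        if alert.ai_threat_score <= close_below:
            queues.auto_closed.append(alert)    # logged for later analyst spot checks
        elif alert.ai_threat_score >= escalate_above:
            queues.escalated.append(alert)      # paged to the on-call analyst
        else:
            queues.human_review.append(alert)   # analyst verdicts can feed back to the model
    return queues
```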

AI Can’t Improve Security on Its Own

The AI revolution has arrived. Artificially intelligent solutions are now critical to developing and deploying holistic security strategies, but AI doesn’t improve infosec on its own. As Dark Reading notes, human expertise will always play a critical role in the security operations center (SOC). In fact, security pros have even more to do once AI takes on the more menial, recurring tasks, from deep cleaning networks and deploying new staff to drafting security policy and developing C-suite confidence. Even with AI help, there’s always room for security improvement.
