June 19, 2017 | By Bob Stasio

Think about an industry that has had a huge labor problem for decades. Hiring managers can’t find enough skilled people, and it takes far too long to train someone to an effective level. With all of the advances in technology, why haven’t these procedures been fully automated?

The truth is that new technologies such as artificial intelligence (AI) and machine learning tend to increase the efficiency and precision of tasks. With humans able to accomplish more work in less time, they are free to explore other domains. This, in turn, leads to a branching of cybersecurity skills in different areas.

The Cybersecurity Skills Gap

The cybersecurity field faces just such a growing skills gap. Trained human operators are still needed for the most difficult tasks, and the advance of AI and machine learning will make those operators more effective.

Shahid Shah, CEO of Netspective Communications, said that there are significant skills gaps in a variety of areas, including, but not limited to:

  • Asset collection;
  • Asset verification;
  • Audit;
  • Compliance;
  • Incident response and tracking;
  • Firewall/intrusion detection system (IDS) and/or intrusion prevention system (IPS) maintenance;
  • Security information and event management (SIEM);
  • Identity and access management (IAM);
  • Application security development;
  • Analytics and business intelligence; and
  • Advanced malware prevention.

Shah theorized that the only way to fill some of these gaps — especially in areas that take in large amounts of data and then synthesize it to find needles in haystacks — is with machine learning and AI.
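Shah’s “needles in haystacks” point can be illustrated with a minimal sketch. The example below uses a simple statistical baseline — a z-score over per-host event counts — to flag the handful of entities that behave abnormally in a large volume of telemetry. The data, threshold, and host names are illustrative assumptions, not drawn from any particular product.

```python
import statistics

def flag_anomalies(event_counts, threshold=3.0):
    """Flag entities whose event volume deviates sharply from the baseline.

    event_counts: dict mapping an entity (user, host, IP) to its event count.
    Returns the entities whose z-score exceeds the threshold.
    """
    counts = list(event_counts.values())
    mean = statistics.mean(counts)
    stdev = statistics.pstdev(counts)
    if stdev == 0:  # every entity behaves identically; nothing stands out
        return []
    return [entity for entity, count in event_counts.items()
            if (count - mean) / stdev > threshold]

# A tiny haystack: most hosts log a handful of failed logins; one logs thousands.
logins = {f"host-{i}": 5 + (i % 3) for i in range(50)}
logins["host-compromised"] = 4000
print(flag_anomalies(logins))  # → ['host-compromised']
```

In practice this kind of baseline would be replaced by a trained model over many features, but the division of labor is the same: the machine scans everything, and only the outliers reach a human.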

How Soon Is Now?

Dan Lohrmann, chief strategist and chief security officer at Security Mentor, Inc., feels that in the short term, AI cannot truly fill the cybersecurity skills gap. In the medium to long term, however, he does think it can help leading organizations fill open positions. Enterprises must develop the right security strategies now to reap the eventual AI and machine learning benefits down the road.

Shah summarized it well: There aren’t enough humans available to do proper analysis, synthesis or anomaly detection in cybersecurity. The only way to fill the skills gap is to program computers to do the grunt work and leave humans to the decision-making, incident management and follow-up.

Lohrmann added that the trouble with the short-term situation is that many businesses and governments already face a cybersecurity skills emergency, and AI and machine learning are not yet making a big enough dent. Part of the reason is that these solutions have not yet been integrated into the people, processes and technology of most public- and private-sector organizations.

Human Expertise Remains Vital

Over time, as more machine learning solutions are released and mature, AI will provide a bigger bang. Nevertheless, Lohrmann reminded us that the well-funded bad guys will also have AI. We will never replace the need for top talent, so AI is just one piece of the puzzle.

Tyler Carbone, COO at Terbium Labs, said that machine learning is great at automating processes at which humans are already proficient. It’s more of a force multiplier, though, than a whole solution.

These technologies have potential when it comes to that first cut at a problem — reducing 500,000 alerts to 500, for example. But at the end of the day, Carbone said, we need a human in the loop for that last step. Humans are the ultimate exception handlers, and while better AI can help reduce the number of exceptions, those that remain will still require the attention of a specialist.
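Carbone’s first-cut example — cutting 500,000 alerts down to 500 — can be sketched as a simple triage step: score every alert, then hand only the top slice to human reviewers. The scoring weights and alert fields below are hypothetical, standing in for whatever model an organization actually trains.

```python
import random

def triage(alerts, score, budget=500):
    """Keep only the highest-scoring alerts for human review.

    alerts: iterable of alert records; score: callable assigning a risk score.
    budget: how many alerts the analyst team can realistically handle.
    """
    ranked = sorted(alerts, key=score, reverse=True)
    return ranked[:budget]

# Toy example: 500,000 synthetic alerts, each a (severity, repeat_count) pair.
random.seed(7)
alerts = [(random.random(), random.randint(1, 20)) for _ in range(500_000)]

# Hypothetical scoring: weight severity heavily, repeat volume lightly.
reviewed = triage(alerts, score=lambda a: 0.9 * a[0] + 0.005 * a[1])
print(len(reviewed))  # → 500
```

The machine does the first cut; the 500 survivors are the exceptions that, as Carbone notes, still require a specialist’s attention.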

Training for the Future

This collaboration between humans and machines is what Scott Schober, president and CEO of Berkeley Varitronics, said is more powerful than the mere sum of its parts. By offloading each task to the worker best suited for it, AI or human, he stated, both efficiency and output can be raised dramatically.

The advance of AI and machine learning will continue to speed up workflows across the cybersecurity domain. However, we should not forget the ongoing training key personnel must pursue to leverage these capabilities to their greatest extent. It is essential that the next generation of cybersecurity workers learn these crucial skills quickly.

Read the complete IBM Report on cybersecurity in the cognitive era
