June 19, 2017 By Bob Stasio 3 min read

Think about an industry that has had a huge labor problem for decades. Hiring managers can’t find enough skilled people, and it takes far too long to train someone to an effective level. With all the advances in technology, why haven’t these procedures been fully automated?

The truth is that new technologies such as artificial intelligence (AI) and machine learning tend to increase the efficiency and precision of tasks. When humans can accomplish more work in less time, they are free to explore other domains. This, in turn, leads to a branching of cybersecurity skills into different areas.

The Cybersecurity Skills Gap

The cybersecurity field faces just such a growing skills gap. Trained human operators are still needed for the most difficult tasks, and the advance of AI and machine learning promises to make them more effective.

Shahid Shah, CEO of Netspective Communications, said that there are significant skills gaps in a variety of areas, including, but not limited to:

  • Asset collection;
  • Asset verification;
  • Audit;
  • Compliance;
  • Incident response and tracking;
  • Firewall/intrusion detection system (IDS) and/or intrusion prevention system (IPS) maintenance;
  • Security information and event management (SIEM);
  • Identity and access management (IAM);
  • Application security development;
  • Analytics and business intelligence; and
  • Advanced malware prevention.

Shah theorized that the only way to fill some of these gaps — especially in areas that take in large amounts of data and then synthesize it to find needles in haystacks — is with machine learning and AI.
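As an illustration of the needle-in-a-haystack work Shah describes, here is a minimal statistical anomaly detector. This is a sketch only: the account names and login counts are invented for the example, and production tooling would draw on far richer features and models.

```python
from statistics import mean, stdev

def find_anomalies(counts, threshold=3.0):
    """Flag values more than `threshold` standard deviations from the mean."""
    mu = mean(counts.values())
    sigma = stdev(counts.values())
    return {user: c for user, c in counts.items()
            if sigma > 0 and abs(c - mu) / sigma > threshold}

# Hypothetical daily login counts per account
logins = {f"user{i}": 10 + (i % 3) for i in range(50)}
logins["svc-backup"] = 480  # a service account suddenly hammering logins

print(find_anomalies(logins))  # only "svc-backup" stands out
```

The machine scans every account; the analyst only ever sees the one that deviates, which is precisely the division of labor Shah has in mind.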

How Soon Is Now?

Dan Lohrmann, chief strategist and chief security officer at Security Mentor, Inc., feels that in the short term, AI cannot truly fill the cybersecurity skills gap. In the medium to long term, however, he does think it can help leading organizations fill open positions. Enterprises must develop the right security strategies now to reap the AI and machine learning benefits down the road.

Shah summarized it well: There aren’t enough humans available to do proper analysis, synthesis or anomaly detection in cybersecurity. The only way to fill the skills gap is to program computers to do the grunt work and leave humans to the decision-making, incident management and follow-up.

Lohrmann added that the trouble with our short-term situation is that we already have a cybersecurity skills emergency in many businesses and governments, and AI and machine learning are not making a big enough dent. Part of the reason is that these solutions are not yet integrated into the people, processes and technology of most public- and private-sector organizations.

Human Expertise Remains Vital

Over time, as more machine learning solutions are released and mature, AI will provide a bigger bang. Nevertheless, Lohrmann thinks we must remember that the well-funded bad guys will also have AI. We will never replace the need for top talent, so AI is just one piece of the puzzle.

Tyler Carbone, COO at Terbium Labs, said that machine learning is great at automating processes at which humans are already proficient. It’s more of a force multiplier, though, than a whole solution.

These technologies have potential when it comes to that first cut at a problem — reducing 500,000 alerts to 500, for example. But at the end of the day, Carbone said, we need a human in the loop for that last step. Humans are the ultimate exception handlers, and while better AI can help reduce the number of exceptions, those that remain will still require the attention of a specialist.
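Carbone’s first cut can be sketched in a few lines. This is a hedged illustration, assuming hypothetical alert records that already carry a model-assigned risk score; a real pipeline would derive that score from many signals.

```python
import random

def triage(alerts, escalate_threshold=0.999):
    """First-cut machine scoring: alerts the model rates at or above the
    threshold go to a human analyst; the rest are archived for audit."""
    escalated = [a for a in alerts if a["score"] >= escalate_threshold]
    archived = [a for a in alerts if a["score"] < escalate_threshold]
    return escalated, archived

random.seed(7)
# Hypothetical day of alerts, each with a model-assigned risk score in [0, 1)
alerts = [{"id": i, "score": random.random()} for i in range(500_000)]
queue, archive = triage(alerts)
print(f"{len(alerts)} alerts -> {len(queue)} escalated for human review")
```

The threshold is the tuning knob: lowering it misses fewer threats at the cost of a longer human queue, which is exactly the exception-handling trade-off Carbone describes.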

Training for the Future

This collaboration between humans and machines is what Scott Schober, president and CEO of Berkeley Varitronics, called more powerful than the mere sum of its parts. By offloading each task to the worker best suited for it, whether AI or human, he said, both efficiency and output can be raised dramatically.

The advance of AI and machine learning will continue to shorten response cycles in the cybersecurity domain. However, key personnel must continue to pursue the training needed to leverage these capabilities to their greatest extent. Learning these skills quickly is essential for the cybersecurity workforce of the future.

Read the complete IBM Report on cybersecurity in the cognitive era
