December 18, 2017 By Shane Schick 2 min read

While the rise of artificial intelligence (AI) has stoked fears of job loss in many industries, cybersecurity professionals have something new to worry about. A recent research study found that more than 91 percent of security experts are worried they'll soon face AI cyberattacks.

Security firm Webroot conducted the survey, which gathered input from 400-plus IT security experts across the U.S. and Japan who work at companies with more than 100 employees. Of those, 87 percent said their firms already use AI to safeguard data, which may be why they believe cybercriminals will eventually figure out how to use the technology for their own purposes.

In a sense, an arms race is underway to see who can employ tools such as machine learning and natural language processing more effectively: CISOs and their teams, or cybercriminals. TechRepublic noted that an overwhelming majority of firms, 97 percent, will increase their spending on AI technologies between now and 2021. Only 1 percent said they didn't see AI as strengthening their overall IT security posture, and three-quarters said it would soon be impossible to safeguard data without the technology at their disposal.

As with any major wave in IT management, of course, it's not just a matter of buying the technology, but of choosing how and where to use it. As a story on BetaNews pointed out, those surveyed by Webroot are particularly interested in seeing where AI could improve the accuracy of their security analytics or offer early warning of a cyberattack.

The potential for cybercriminals to launch AI cyberattacks, meanwhile, may be limited only by their imaginations. Digital Journal reported that those surveyed are concerned about the ways AI could be used to develop more sophisticated malware, for instance. In some cases, this technology is designed to "learn" much as human beings do, which could make it highly useful in creating near-foolproof phishing scams and social engineering techniques that dupe employees into handing over passwords or other forms of access. As smart as AI is, IT security pros may need to outsmart it once criminals get involved.
