Cyberattacks are on the rise as ransomware continues to plague companies across all industries and malicious actors look to nab bitcoin payouts and steal personal data. The first quarter of 2018 also saw a spike in both distributed denial-of-service (DDoS) attack volume and duration.

But despite the prevalence of these external threats, a February 2018 report found that more than one in four attacks start inside corporate networks. These insider threats can be devastating, especially when employees hold privileged accounts. Worse, they may go undetected for months if companies aren’t looking inward.

Enterprises need a new way to break bad behavior that takes the guesswork out of identifying accidental (or acrimonious) employee incidents. With that in mind, artificial intelligence (AI) may offer the next iteration of insider attack security.

Cyberattacks: Insider Threats by the Numbers

According to the report, the number of insider attacks varies significantly by sector. In manufacturing, just 13 percent of threats stem from insiders. In the public sector, 34 percent of all incidents start with authorized users. Health care tops the insider threats list with 56 percent of incidents tied to human error or intentional misuse.

In 17 percent of insider breaches, mistakes — rather than malice — were the underlying cause. Employees might send emails to the wrong recipient, improperly delete classified information or misconfigure privacy settings. While intention matters when it comes to discipline and long-term staffing decisions, it has no bearing on the impact of a data breach. Employees who mistakenly click on malicious links or open infected email attachments can subject organizations to the same types of IT disasters that stem from targeted outsider attacks.

The worst-case scenario when it comes to insider threats, according to ITWeb, is a hybrid attack that includes both internal and external actors. Described as a “toxic cocktail,” it’s incredibly difficult to detect and mitigate this type of incident.

IT Security: Need for Speed

The Department of Energy saw a 23 percent boost in cybersecurity spending in 2018, while the Nuclear Regulatory Commission received a 33 percent increase, according to GCN. But no matter how much money organizations invest in cybersecurity, humans remain the weak link in the chain. GCN suggests moving IT security “from human to machine speed” to both detect and resolve potential issues.

Insider threats also took center stage at the 2018 RSA Conference. Juniper Networks’ CEO, Rami Rahim, spoke about the “unfair advantage” criminals enjoy because the internet eliminates the typical constraints of time, distance and identity.

So, it’s no surprise industry experts like Randy Trzeciak of the CERT Insider Threat Center see a role for AI in defending corporate networks against insider threats. Trzeciak noted in a 2018 RSA Conference interview with BankInfoSecurity that “insiders who defraud organizations exhibit consistent potential risk indicators.”

AI offers a way to detect these potential risk patterns more quickly without the inherent bias of human observers — which is critical given the nature of insider attacks. Since these attacks stem from authorized access, organizations may not realize they’ve been breached until the damage is done.

Teaching AI Technology

AI assisting security professionals makes sense in theory, but what does this look like in practice? According to VentureBeat, training is an essential part of the equation. For cybersecurity controls, this means effectively teaching AI to recognize typical patterns of insider threat behavior. These might include regular file transfers off corporate networks onto physical media or private email accounts — or strange account activity that doesn’t coincide with regular work shifts. Individually, these signs could be outliers. But when detected in concert by AI tools, they’re a cause for concern.
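To make the "individually outliers, concerning in concert" idea concrete, here is a minimal sketch of how such indicators might be combined into a single risk score. The indicator names, weights and threshold are hypothetical, chosen only for illustration — a real system would learn these from labeled behavioral data rather than hard-code them.

```python
# Hypothetical insider-threat risk scoring. All indicator names, weights and
# the alert threshold below are illustrative assumptions, not a real product.

WEIGHTS = {
    "offsite_file_transfer": 0.5,    # files copied to USB media or private email
    "off_hours_login": 0.25,         # activity outside the user's normal shift
    "privilege_escalation": 0.75,    # admin rights used outside normal duties
}
ALERT_THRESHOLD = 0.75  # most single indicators stay below this line

def risk_score(events):
    """Sum the weights of the distinct indicators observed; cap at 1.0."""
    return min(1.0, sum(WEIGHTS.get(e, 0.0) for e in set(events)))

def should_alert(events):
    """Alert only when the combined score crosses the threshold."""
    return risk_score(events) >= ALERT_THRESHOLD

# One indicator alone is an outlier; several in concert trigger an alert.
print(should_alert(["off_hours_login"]))                           # False
print(should_alert(["off_hours_login", "offsite_file_transfer"]))  # True
```

The key design point mirrors the paragraph above: no single signal is damning, but the scoring function treats their co-occurrence as more than the sum of its parts once a threshold is crossed.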

Also concerning is the double-edged nature of intelligence tools. As noted by Health IT Security, AI could be used to both bolster and undermine health data security. There’s also an emerging category of adversarial AI tools designed to automatically infiltrate networks and custom-design attack vectors that can compromise security.

The philosophy of AI development also matters. As shown by recent experiments that released AI-enabled bots into the world of social media, artificial intelligence tools can learn the wrong lessons just as easily as the right ones.

What does this mean for AI as insider defense?

Applied Learning

Insider threats are now a top priority for organizations. Despite good intentions, employees may unwittingly expose critical systems to malware, ransomware or other emerging threats. Given the sheer number of mobile- and cloud-based endpoints, it’s impossible for human security experts to keep pace with both internal and external threats, especially when inside actors may go undetected.

AI offers a way to detect common patterns of compromise and network abuse, restrict access as applicable and report actions taken to IT professionals. The next step toward breaking bad behavior is to implement AI and train it to recognize key patterns, disregard signal noise and accelerate security from human to machine speed.
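The detect, restrict and report loop described above can be sketched as follows. The `Account` structure, the two-indicator suspension rule and the audit log are all hypothetical simplifications of what an AI-driven access-control workflow might automate.

```python
# Illustrative detect -> restrict -> report loop. The data model and the
# two-indicator suspension rule are assumptions made for this sketch.

from dataclasses import dataclass, field

@dataclass
class Account:
    user: str
    suspended: bool = False
    flags: list = field(default_factory=list)

def handle_detection(account, indicator, audit_log):
    """Record a risky indicator; restrict access and report once several co-occur."""
    account.flags.append(indicator)
    if len(account.flags) >= 2 and not account.suspended:
        account.suspended = True  # restrict access as applicable
        audit_log.append(f"suspended {account.user}: {account.flags}")  # report to IT

log = []
acct = Account("jdoe")
handle_detection(acct, "offsite_file_transfer", log)  # first flag: no action yet
handle_detection(acct, "off_hours_login", log)        # second flag: suspend + log
print(acct.suspended)  # True
print(len(log))        # 1
```

Automating this loop is what moves response "from human to machine speed": the restriction happens at detection time, while the audit log keeps human IT professionals in the review loop.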

Learn more about adversarial AI and the IBM Adversarial Robustness Toolbox (ART)
