Cyberattacks are on the rise as ransomware continues to plague companies across all industries and malicious actors look to nab bitcoin payouts and steal personal data. The first quarter of 2018 also saw a spike in both distributed denial-of-service (DDoS) attack volume and duration.

But despite the prevalence of these external threats, a February 2018 report found that more than one in four attacks starts inside corporate networks. These insider threats can be devastating, especially if employees have privileged accounts. Plus, threats may go undetected for months if companies aren’t looking inward.

Enterprises need a new way to break bad behavior, one that takes the guesswork out of identifying accidental (or acrimonious) employee incidents. With that in mind, artificial intelligence (AI) may offer the next iteration of insider attack security.

Cyberattacks: Insider Threats by the Numbers

According to the report, the number of insider attacks varies significantly by sector. In manufacturing, just 13 percent of threats stem from insiders. In the public sector, 34 percent of all incidents start with authorized users. Health care tops the insider threats list with 56 percent of incidents tied to human error or intentional misuse.

In 17 percent of insider breaches, mistakes — rather than malice — were the underlying cause. Employees might send emails to the wrong recipient, improperly delete classified information or misconfigure privacy settings. While intention matters when it comes to discipline and long-term staffing decisions, it has no bearing on the impact of a data breach. Employees who mistakenly click on malicious links or open infected email attachments can subject organizations to the same types of IT disasters that stem from targeted outsider attacks.

The worst-case scenario when it comes to insider threats, according to ITWeb, is a hybrid attack that includes both internal and external actors. Described as a “toxic cocktail,” this type of incident is incredibly difficult to detect and mitigate.

IT Security: Need for Speed

The Department of Energy saw a 23 percent boost in cybersecurity spending in 2018, while the Nuclear Regulatory Commission received a 33 percent increase, according to GCN. But no matter how much money organizations invest in cybersecurity, humans remain the weak link in the chain. GCN suggests moving IT security “from human to machine speed” to both detect and resolve potential issues.

Insider threats also took center stage at the 2018 RSA Conference. Juniper Networks’ CEO, Rami Rahim, spoke about the “unfair advantage” criminals enjoy on the internet, which eliminates the typical constraints of time, distance and identity.

So, it’s no surprise industry experts like Randy Trzeciak of the CERT Insider Threat Center see a role for AI in defending corporate networks against insider threats. Trzeciak noted in a 2018 RSA Conference interview with BankInfoSecurity that “insiders who defraud organizations exhibit consistent potential risk indicators.”

AI offers a way to detect these potential risk patterns more quickly without the inherent bias of human observers — which is critical given the nature of insider attacks. Since these attacks stem from authorized access, organizations may not realize they’ve been breached until the damage is done.

Teaching AI Technology

AI assisting security professionals makes sense in theory, but what does this look like in practice? According to VentureBeat, training is an essential part of the equation. For cybersecurity controls, this means effectively teaching AI to recognize typical patterns of insider threat behavior. These might include regular file transfers off corporate networks onto physical media or private email accounts, or strange account activity that doesn’t coincide with regular work shifts. Individually, these signs could be outliers. But when detected in concert by AI tools, they’re a cause for concern.
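To make that concrete, here is a minimal sketch of the kind of unsupervised anomaly detection such tools build on, using scikit-learn’s IsolationForest. The features (login hour, data moved off-network, USB write events) and the simulated baseline are illustrative assumptions, not a description of any particular product.

```python
# A minimal sketch of unsupervised insider-threat scoring. The
# features and simulated baseline below are illustrative assumptions.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Simulated normal behavior: work-hour logins, modest transfers.
normal = np.column_stack([
    rng.normal(10, 1.5, 500),           # login hour (~10 a.m.)
    rng.normal(50, 20, 500),            # MB moved off-network per day
    rng.poisson(1, 500).astype(float),  # USB write events per day
])

model = IsolationForest(contamination=0.01, random_state=0)
model.fit(normal)

# A suspicious day: 3 a.m. login, large transfer, many USB writes.
# Each feature alone might pass as a one-off outlier; together they
# score as anomalous, mirroring the "in concert" signal noted above.
suspect = np.array([[3.0, 900.0, 12.0]])
print(model.predict(suspect))        # -1 flags an anomaly
print(model.score_samples(suspect))  # lower means more anomalous
```

In practice, such baselines would be learned per user or per role from real telemetry rather than from a single shared distribution, which is where the training VentureBeat describes comes in.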

Also concerning is the double-edged nature of intelligence tools. As noted by Health IT Security, AI could be used to both bolster and undermine health data security. There’s also an emerging category of adversarial AI tools designed to automatically infiltrate networks and custom-design attack vectors that can compromise security.

The philosophy of AI development also matters. As shown by recent experiments that released AI-enabled bots into the world of social media, artificial intelligence tools can learn the wrong lessons just as easily as the right ones.

What does this mean for AI as an insider threat defense?

Applied Learning

Insider threats are now a top priority for organizations. Despite good intentions, employees may unwittingly expose critical systems to malware, ransomware or other emerging threats. Given the sheer number of mobile- and cloud-based endpoints, it’s impossible for human security experts to keep pace with both internal and external threats, especially when inside actors may go undetected.

AI offers a way to detect common patterns of compromise and network abuse, restrict access as applicable and report actions taken to IT professionals. The next step toward breaking bad behavior is to implement AI and train it to recognize key patterns, disregard signal noise and accelerate security from human to machine speed.
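As a rough illustration of that detect, restrict and report loop, the sketch below wires an anomaly score into access control and alerting. The restrict_access and notify_soc functions, along with the threshold, are hypothetical placeholders; a real deployment would call an identity provider and a SIEM instead.

```python
# Hypothetical detect -> restrict -> report loop. restrict_access()
# and notify_soc() are invented stand-ins for identity-provider and
# SIEM integrations; the threshold is an assumed tuning parameter.
ANOMALY_THRESHOLD = -0.55  # assumed value, tuned per environment

def restrict_access(user: str) -> None:
    # Placeholder: suspend sessions or force step-up authentication.
    print(f"[IAM] restricting access for {user}")

def notify_soc(user: str, score: float) -> None:
    # Placeholder: raise an alert for human analysts to review.
    print(f"[SIEM] insider-risk alert: user={user} score={score:.2f}")

def handle_activity(user: str, features: list[float], model) -> None:
    """Score one user's daily activity and act at machine speed."""
    score = model.score_samples([features])[0]
    if score < ANOMALY_THRESHOLD:
        restrict_access(user)    # contain first, automatically
        notify_soc(user, score)  # then put a human in the loop

# Example, reusing the model from the earlier sketch:
# handle_activity("jdoe", [3.0, 900.0, 12.0], model)
```

Containing access first and alerting second is the design choice that moves response from human to machine speed while keeping analysts in the loop for judgment calls.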

Learn more about adversarial AI and the IBM Adversarial Robustness Toolbox (ART)
