Cyberattacks are on the rise as ransomware continues to plague companies across all industries and malicious actors look to nab bitcoin payouts and steal personal data. The first quarter of 2018 also saw a spike in both distributed denial-of-service (DDoS) attack volume and duration.

But despite the prevalence of these external threats, a February 2018 report found that more than one in four attacks start inside corporate networks. These insider threats can be devastating, especially if employees have privileged accounts. Worse, threats may go undetected for months if companies aren't looking inward.

Enterprises need a new way to break bad behavior that takes the guesswork out of identifying accidental (or acrimonious) employee incidents. With that in mind, artificial intelligence (AI) may offer the next iteration of insider attack security.

Cyberattacks: Insider Threats by the Numbers

According to the report, the share of attacks that start with insiders varies significantly by sector. In manufacturing, just 13 percent of threats stem from insiders. In the public sector, 34 percent of all incidents start with authorized users. Health care tops the insider threats list, with 56 percent of incidents tied to human error or intentional misuse.

In 17 percent of insider breaches, mistakes — rather than malice — were the underlying cause. Employees might send emails to the wrong recipient, improperly delete classified information or misconfigure privacy settings. While intention matters when it comes to discipline and long-term staffing decisions, it has no bearing on the impact of a data breach. Employees who mistakenly click on malicious links or open infected email attachments can subject organizations to the same types of IT disasters that stem from targeted outsider attacks.

The worst-case scenario when it comes to insider threats, according to ITWeb, is a hybrid attack that includes both internal and external actors. Described as a “toxic cocktail,” it’s incredibly difficult to detect and mitigate this type of incident.

IT Security: Need for Speed

The Department of Energy saw a 23 percent boost in cybersecurity spending in 2018, while the Nuclear Regulatory Commission received a 33 percent increase, according to GCN. But no matter how much money organizations invest in cybersecurity, humans remain the weak link in the chain. GCN suggests moving IT security “from human to machine speed” to both detect and resolve potential issues.

Insider threats also took center stage at the 2018 RSA Conference. Juniper Networks CEO Rami Rahim spoke about the "unfair advantage" criminals enjoy on the internet, which strips away the typical constraints of time, distance and identity.

So, it’s no surprise industry experts like Randy Trzeciak of the CERT Insider Threat Center see a role for AI in defending corporate networks against insider threats. Trzeciak noted in a 2018 RSA Conference interview with BankInfoSecurity that “insiders who defraud organizations exhibit consistent potential risk indicators.”

AI offers a way to detect these potential risk patterns more quickly without the inherent bias of human observers — which is critical given the nature of insider attacks. Since these attacks stem from authorized access, organizations may not realize they’ve been breached until the damage is done.

Teaching AI Technology

AI assisting security professionals makes sense in theory, but what does this look like in practice? According to VentureBeat, training is an essential part of the equation. For cybersecurity controls, this means teaching AI to reliably recognize typical patterns of insider threat behavior. These might include regular file transfers off corporate networks onto physical media or private email accounts, or strange account activity that doesn't coincide with regular work shifts. Individually, these signs could be outliers. But when detected in concert by AI tools, they're a cause for concern.
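To make that concrete, here is a minimal sketch of how an unsupervised outlier detector might weigh those weak signals together. The feature set, the synthetic numbers and the choice of scikit-learn's IsolationForest are illustrative assumptions, not details drawn from any of the reports cited above.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Hypothetical per-user, per-day features (assumed for illustration):
# [off-hours logins, MB copied to removable media, emails to personal domains]
normal = rng.normal(loc=[1.0, 5.0, 0.5], scale=[1.0, 3.0, 0.5], size=(500, 3))
suspect = rng.normal(loc=[8.0, 400.0, 6.0], scale=[2.0, 50.0, 2.0], size=(5, 3))
activity = np.vstack([normal, suspect])

# Fit an unsupervised outlier detector on the combined activity log.
model = IsolationForest(contamination=0.01, random_state=0)
model.fit(activity)

# decision_function returns negative scores for points the forest
# isolates quickly, i.e., behavior that is anomalous in combination
# even when each individual signal looks like mere noise.
print(model.decision_function(suspect))
```

The point of the sketch is the last comment: no single feature proves intent, but a model trained on baseline behavior can surface the combination for human review.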

Also concerning is the double-edged nature of intelligence tools. As noted by Health IT Security, AI could be used to both bolster and undermine health data security. There’s also an emerging category of adversarial AI tools designed to automatically infiltrate networks and custom-design attack vectors that can compromise security.
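To show what "adversarial" means in practice, here is a hedged, textbook-style sketch of the fast gradient sign method (FGSM), the kind of evasion technique such tools build on. Everything here (the toy model, the loss function, the epsilon value) is a placeholder rather than a real attack framework.

```python
import torch

def fgsm_perturb(model, loss_fn, x, y, eps=0.05):
    # Fast gradient sign method: nudge the input in the direction
    # that most increases the model's loss (a classic evasion attack).
    x_adv = x.clone().detach().requires_grad_(True)
    loss = loss_fn(model(x_adv), y)
    loss.backward()
    return (x_adv + eps * x_adv.grad.sign()).detach()

# Toy usage against a placeholder classifier (purely illustrative).
model = torch.nn.Linear(3, 2)
x, y = torch.randn(1, 3), torch.tensor([1])
x_adv = fgsm_perturb(model, torch.nn.CrossEntropyLoss(), x, y)
print((x_adv - x).abs().max())  # perturbation stays bounded by eps
```

Defensive research such as the IBM Adversarial Robustness Toolbox, linked at the end of this article, exists precisely to harden models against perturbations like these.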

The philosophy of AI development also matters. As shown by recent experiments that released AI-enabled bots into the world of social media, artificial intelligence tools can learn the wrong lessons just as easily as the right ones.

What does this mean for AI as insider defense?

Applied Learning

Insider threats are now a top priority for organizations. Despite good intentions, employees may unwittingly expose critical systems to malware, ransomware or other emerging threats. Given the sheer number of mobile- and cloud-based endpoints, it’s impossible for human security experts to keep pace with both internal and external threats, especially when inside actors may go undetected.

AI offers a way to detect common patterns of compromise and network abuse, restrict access as applicable and report actions taken to IT professionals. The next step toward breaking bad behavior is to implement AI and train it to recognize key patterns, disregard signal noise and accelerate security from human to machine speed.
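As a final sketch, here is one way that detect-restrict-report loop could be wired together. The threshold and the restrict_access and notify_soc helpers are hypothetical stand-ins for whatever controls an organization already runs; nothing here is a real API.

```python
# Scores below this (IsolationForest-style) read as suspect; the
# value is an assumed tuning point, not a recommendation.
ALERT_THRESHOLD = -0.1

def respond(user, score, restrict_access, notify_soc):
    # Restrict first, then report: every automated action is surfaced
    # to IT professionals so humans make the final call.
    if score < ALERT_THRESHOLD:
        restrict_access(user)  # e.g., suspend privileged sessions
        notify_soc(user=user, score=score)
        return "restricted"
    return "allowed"

# Example with stand-in callbacks:
print(respond("jdoe", -0.4,
              restrict_access=lambda u: print(f"restricting {u}"),
              notify_soc=lambda **kw: print(f"alerting SOC: {kw}")))
```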

Learn more about adversarial AI and the IBM Adversarial Robustness Toolbox (ART)
