Cyberattacks are on the rise as ransomware continues to plague companies across all industries and malicious actors look to nab bitcoin payouts and steal personal data. The first quarter of 2018 also saw a spike in both distributed denial-of-service (DDoS) attack volume and duration.

But despite the prevalence of these external threats, a February 2018 report found that more than one in four attacks start inside corporate networks. These insider threats can be devastating, especially if employees have privileged accounts. Plus, threats may go undetected for months if companies aren’t looking inward.

Enterprises need a new way to break bad behavior that takes the guesswork out of identifying accidental (or acrimonious) employee incidents. With that in mind, artificial intelligence (AI) may offer the next iteration of insider attack security.

Cyberattacks: Insider Threats by the Numbers

According to the report, the number of insider attacks varies significantly by sector. In manufacturing, just 13 percent of threats stem from insiders. In the public sector, 34 percent of all incidents start with authorized users. Health care tops the insider threats list with 56 percent of incidents tied to human error or intentional misuse.

In 17 percent of insider breaches, mistakes — rather than malice — were the underlying cause. Employees might send emails to the wrong recipient, improperly delete classified information or misconfigure privacy settings. While intention matters when it comes to discipline and long-term staffing decisions, it has no bearing on the impact of a data breach. Employees who mistakenly click on malicious links or open infected email attachments can subject organizations to the same types of IT disasters that stem from targeted outsider attacks.

The worst-case scenario when it comes to insider threats, according to ITWeb, is a hybrid attack that involves both internal and external actors. Described as a “toxic cocktail,” this type of incident is incredibly difficult to detect and mitigate.

IT Security: Need for Speed

The Department of Energy saw a 23 percent boost in cybersecurity spending in 2018, while the Nuclear Regulatory Commission received a 33 percent increase, according to GCN. But no matter how much money organizations invest in cybersecurity, humans remain the weak link in the chain. GCN suggests moving IT security “from human to machine speed” to both detect and resolve potential issues.

Insider threats also took center stage at the 2018 RSA Conference. Juniper Networks CEO Rami Rahim spoke about the “unfair advantage” the internet gives criminals by eliminating the typical constraints of time, distance and identity.

So, it’s no surprise industry experts like Randy Trzeciak of the CERT Insider Threat Center see a role for AI in defending corporate networks against insider threats. Trzeciak noted in a 2018 RSA Conference interview with BankInfoSecurity that “insiders who defraud organizations exhibit consistent potential risk indicators.”

AI offers a way to detect these potential risk patterns more quickly without the inherent bias of human observers — which is critical given the nature of insider attacks. Since these attacks stem from authorized access, organizations may not realize they’ve been breached until the damage is done.

Teaching AI Technology

AI assisting security professionals makes sense in theory, but what does this look like in practice? According to VentureBeat, training is an essential part of the equation. For cybersecurity controls, this means effectively teaching AI to recognize typical patterns of insider threat behavior. These might include regular file transfers off corporate networks onto physical media or private email accounts, or account activity that doesn’t coincide with regular work shifts. Individually, these signs could be outliers. But when detected in concert by AI tools, they’re a cause for concern.
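To make that concrete, here is a minimal sketch of what “in concert” detection might look like using an off-the-shelf unsupervised model. The feature names, sample data and the choice of scikit-learn’s IsolationForest are illustrative assumptions, not details drawn from the report or from any specific vendor tool.

```python
# Minimal sketch: flagging unusual insider activity with an unsupervised model.
# Assumes activity logs have already been aggregated into per-user counts;
# the feature names and values below are illustrative placeholders.
import pandas as pd
from sklearn.ensemble import IsolationForest

# Hypothetical per-user daily activity features
activity = pd.DataFrame(
    [
        # usb_file_copies, emails_to_personal_domains, off_hours_logins
        [0, 1, 0],
        [1, 0, 0],
        [0, 0, 1],
        [0, 2, 0],
        [14, 9, 6],  # several weak signals occurring together
    ],
    columns=["usb_file_copies", "emails_to_personal_domains", "off_hours_logins"],
    index=["user_a", "user_b", "user_c", "user_d", "user_e"],
)

# Fit on observed behavior and score each user; IsolationForest marks
# outliers with the label -1.
model = IsolationForest(contamination=0.2, random_state=42)
labels = model.fit_predict(activity)

flagged = activity.index[labels == -1].tolist()
print("Flagged for analyst review:", flagged)  # e.g. ['user_e']
```

The point is not the specific algorithm: any model that scores combinations of weak signals can surface the user whose individually unremarkable actions add up to a pattern worth investigating.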

Also concerning is the double-edged nature of these AI tools. As noted by Health IT Security, AI could be used to both bolster and undermine health data security. There’s also an emerging category of adversarial AI tools designed to automatically infiltrate networks and custom-design attack vectors that can compromise security.

The philosophy of AI development also matters. As shown by recent experiments that released AI-enabled bots into the world of social media, artificial intelligence tools can learn the wrong lessons just as easily as the right ones.

What does this mean for AI as insider defense?

Applied Learning

Insider threats are now a top priority for organizations. Despite good intentions, employees may unwittingly expose critical systems to malware, ransomware or other emerging threats. Given the sheer number of mobile- and cloud-based endpoints, it’s impossible for human security experts to keep pace with both internal and external threats, especially when inside actors may go undetected.

AI offers a way to detect common patterns of compromise and network abuse, restrict access as applicable and report actions taken to IT professionals. The next step toward breaking bad behavior is to implement AI and train it to recognize key patterns, disregard signal noise and accelerate security from human to machine speed.
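As a rough illustration of that detect, restrict and report loop, the sketch below routes a hypothetical model finding into two placeholder actions. The Finding class, the restrict_access and notify_security_team helpers and the confidence threshold are all illustrative assumptions; a real deployment would call an organization’s own identity, SIEM and ticketing systems, with a human analyst reviewing every automated action.

```python
# Minimal sketch of a detect / restrict / report loop for insider-threat findings.
# The detector output, directory call and alerting call are hypothetical stand-ins.
from dataclasses import dataclass


@dataclass
class Finding:
    user: str
    pattern: str   # e.g. "bulk file transfer to personal email"
    score: float   # model confidence between 0.0 and 1.0


def restrict_access(user: str) -> None:
    # Placeholder: in practice this would call the identity provider's API
    # to suspend sessions or require step-up authentication for the account.
    print(f"[action] access restricted for {user}")


def notify_security_team(finding: Finding) -> None:
    # Placeholder: in practice this would open a ticket or SIEM alert so
    # IT professionals can review what the system detected and did.
    print(f"[alert] {finding.user}: {finding.pattern} (score={finding.score:.2f})")


def handle(findings: list[Finding], threshold: float = 0.8) -> None:
    for finding in findings:
        if finding.score >= threshold:     # act only on high-confidence detections
            restrict_access(finding.user)
        notify_security_team(finding)      # report every finding regardless


handle([Finding("user_e", "bulk file transfer to personal email", 0.93)])
```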

Learn more about adversarial AI and the IBM Adversarial Robustness Toolbox (ART)
