July 11, 2018 By Douglas Bonderud 3 min read

Machine learning and artificial intelligence (AI) are transitioning from proof-of-concept programs to functional corporate infrastructure. As spending on these technologies continues to rise sharply, their expanding footprint in the enterprise is all but inevitable.

But the adoption of digital intelligence introduces new risk: IT teams face a steep learning curve while cybercriminals look for ways to compromise the new tools.

Could adversarial AI become the newest insider threat?

Why AI Won’t Replace Human Expertise

Security teams are overworked and understaffed, but some still worry that AI tools will eventually replace human expertise. In response to these concerns, Phys.org noted in June 2018 that discussions about artificial intelligence and automation are “dominated by either doomsayers who fear robots will supplant humans in the workforce or optimists who think there’s nothing new under the sun.”

New research, however, suggests that these technologies are better suited to replace specific tasks within jobs rather than wiping out occupations en masse. As reported by The Verge in June 2018, a pilot plan by the U.S. Army will leverage machine learning to better predict when vehicles need repair — taking some of the pressure off of human technicians while reducing total cost.

The same is possible in IT security: using intelligent tools for the heavy lifting of maintenance and data collection, freeing technology professionals for higher-value tasks.

Will Machine Learning Reduce or Multiply Insider Breaches?

Though new technology likely won’t be stealing jobs, it could boost the risk of an insider breach. All companies are vulnerable to insider threats, which can take the form of deliberate actions to steal data or unintentional oversharing of corporate information. Since AI and machine learning tools lack the human motivations and mistakes that underpin these risks, they should, in theory, produce a safer environment.

As noted by CSO Online in January 2018, however, malicious actors could leverage the same technologies to create unwitting insider threats by poisoning data pools. By tampering with data inputs, attackers also compromise outputs — which companies may not realize until it’s too late.
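As a minimal, hypothetical sketch of how poisoned labels can shift a model's learned boundary, consider a toy nearest-centroid classifier (the data, labels and functions below are invented for illustration, not any real product's pipeline):

```python
# Toy data-poisoning sketch: a nearest-centroid classifier trained on
# clean vs. tampered labels. Everything here is hypothetical.

def centroid(points):
    """Mean of a list of 1-D feature values."""
    return sum(points) / len(points)

def train(samples):
    """samples: list of (feature, label) pairs; returns one centroid per class."""
    safe = [x for x, y in samples if y == "safe"]
    bad = [x for x, y in samples if y == "malicious"]
    return {"safe": centroid(safe), "malicious": centroid(bad)}

def classify(model, x):
    """Assign x to the class with the nearest centroid."""
    return min(model, key=lambda label: abs(model[label] - x))

clean = [(0.1, "safe"), (0.2, "safe"), (0.9, "malicious"), (1.0, "malicious")]
print(classify(train(clean), 0.6))  # malicious

# An attacker who can flip even one training label shifts the boundary:
poisoned = [(0.1, "safe"), (0.2, "safe"), (0.9, "safe"), (1.0, "malicious")]
print(classify(train(poisoned), 0.6))  # safe
```

The point of the sketch is that the output looks normal either way; without an independent baseline, the company has no signal that its training pool was tampered with.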

According to a May 2018 Medium report, meanwhile, there’s a subtler class of attacks on the rise: adversarial sampling. By creating fake samples that exist on the boundary of AI decision-making capabilities, cybercriminals may be able to force recurring misclassification, compromising the underlying trust of machine learning models in turn.

How to Thwart AI-Powered Insider Threats

With the adoption of intelligent tools on the rise, how can companies safeguard against more powerful insider threats?

Best practices include:

  • Creating human partnerships: These new tools work best in specific-task situations. By pairing any new learning tools with a human counterpart, companies create an additional line of defense against potential compromise.
  • Developing checks and balances: Does reported data match observations? Has it been independently verified? As more critical decision-making is handed off to AI and automation, enterprises must develop check-and-balance systems that compare outputs to reliable baseline data.
  • Deploying tools with a purpose: In many ways, the rise of intelligent technologies mirrors that of the cloud. At first an outlier, the solution quickly became a must-have to enable digital transition. There is potential for a similar more-is-better tendency here, but this overlooks the key role of AI and machine learning as a way to address specific pain points rather than simply keep up with the Joneses. Start small by finding a data-driven problem that could benefit from the implementation of intelligence technologies. Think of it like the zero-trust model for data access: It’s easier to contain potential compromise when the attack surface is inherently limited.
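A check-and-balance system like the one described above could start as simply as comparing a model's output rate against a trusted historical baseline; this sketch uses invented labels and thresholds purely for illustration:

```python
# Hypothetical check-and-balance: compare the model's daily flag rate to
# a trusted baseline. A sudden divergence is a cue to verify inputs
# independently. Thresholds and labels here are illustrative assumptions.

def flag_rate(decisions):
    """Fraction of events the model labeled 'malicious'."""
    return sum(1 for d in decisions if d == "malicious") / len(decisions)

def drift_alert(today, baseline_rate, tolerance=0.10):
    """Alert when today's rate strays more than `tolerance` from baseline."""
    return abs(flag_rate(today) - baseline_rate) > tolerance

baseline = 0.05  # e.g., 5% of events historically flagged
normal_day = ["benign"] * 95 + ["malicious"] * 5
odd_day = ["benign"] * 70 + ["malicious"] * 30

print(drift_alert(normal_day, baseline))  # False -- matches baseline
print(drift_alert(odd_day, baseline))     # True -- verify the data pool
```

A real deployment would compare richer signals than a single rate, but the principle is the same: model outputs are checked against independently verified baseline data rather than trusted on their own.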

Machine learning and AI tools are gaining corporate support, and fortunately, they’re not likely to supplant the IT workforce. Looking forward, human aid will in fact be essential to proactively address the potential for next-generation insider threats empowered by compromised learning tools and adversarial AI.
