July 11, 2018 By Douglas Bonderud 3 min read

Machine learning and artificial intelligence (AI) are transitioning from proof-of-concept programs to functional corporate infrastructure. As spending on these technologies continues to rise sharply, their expanding prevalence is all but inevitable.

But the adoption of digital intelligence introduces new risk: IT experts face a steep adaptation curve while cybercriminals look for ways to compromise the new tools.

Could adversarial AI become the newest insider threat?

Why AI Won’t Replace Human Expertise

Security teams are overworked and understaffed, but some still worry that AI tools will eventually replace human expertise. In response to these concerns, Phys.org noted in June 2018 that discussions about artificial intelligence and automation are “dominated by either doomsayers who fear robots will supplant humans in the workforce or optimists who think there’s nothing new under the sun.”

New research, however, suggests that these technologies are better suited to replace specific tasks within jobs rather than wiping out occupations en masse. As reported by The Verge in June 2018, a pilot plan by the U.S. Army will leverage machine learning to better predict when vehicles need repair — taking some of the pressure off of human technicians while reducing total cost.

The same is possible in IT security: using intelligent tools for the heavy lifting of maintenance and data collection frees up technology professionals for other tasks.

Will Machine Learning Reduce or Multiply Insider Breaches?

Though new technology likely won't be stealing jobs, it could boost the risk of an insider breach. All companies are vulnerable to insider threats, which can take the form of deliberate actions to steal data or unintentional oversharing of corporate information. Since AI and machine learning tools lack the human traits that underpin these risks, they should, in theory, produce a safer environment.

As noted by CSO Online in January 2018, however, malicious actors could leverage the same technologies to create unwitting insider threats by poisoning data pools. By tampering with data inputs, attackers also compromise outputs — which companies may not realize until it’s too late.
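To make the poisoning risk concrete, here is a minimal sketch, using an invented toy anomaly detector rather than any real product: if an attacker can seed the training pool with extreme values, the model "learns" that attack-sized readings are normal, and its outputs are quietly compromised.

```python
# Toy illustration of data poisoning (hypothetical detector, not a real tool):
# a naive anomaly check flags values far from the training mean, but
# tampered training data shifts its threshold so attacks pass unnoticed.

def build_detector(training_data, k=3.0):
    """Flag values more than k standard deviations from the training mean."""
    n = len(training_data)
    mean = sum(training_data) / n
    var = sum((x - mean) ** 2 for x in training_data) / n
    std = var ** 0.5
    return lambda x: abs(x - mean) > k * std

clean = [10.0, 11.0, 9.5, 10.5, 10.2, 9.8]
detector = build_detector(clean)
print(detector(50.0))  # True: clearly anomalous against the clean baseline

# An attacker with access to the data pool seeds extreme readings.
poisoned = clean + [48.0, 52.0, 50.0]
tainted_detector = build_detector(poisoned)
print(tainted_detector(50.0))  # False: the same attack now looks normal
```

The company sees a detector that still runs and still returns answers; nothing signals that its baseline has been moved until the missed detections surface.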

According to a May 2018 Medium report, meanwhile, there’s a subtler class of attacks on the rise: adversarial sampling. By creating fake samples that exist on the boundary of AI decision-making capabilities, cybercriminals may be able to force recurring misclassification, compromising the underlying trust of machine learning models in turn.
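A boundary-hugging sample can be sketched in a few lines. The classifier, weights, and threshold below are all invented for illustration; the point is only that a small, targeted perturbation pushes a correctly classified input just across the decision boundary.

```python
# Minimal adversarial-sampling sketch (toy linear classifier, made-up
# weights): step each feature slightly against its weight's sign so the
# score drops below the decision boundary and the sample is misclassified.

WEIGHTS = [0.8, -0.3, 0.5]   # hypothetical learned weights
BIAS = -0.4
THRESHOLD = 0.0              # score >= 0 means "malicious"

def score(features):
    return sum(w * x for w, x in zip(WEIGHTS, features)) + BIAS

def sign(v):
    return (v > 0) - (v < 0)

def adversarial(features, epsilon=0.2):
    # Nudge each feature opposite to the direction that raises the score.
    return [x - epsilon * sign(w) for w, x in zip(WEIGHTS, features)]

sample = [0.6, 0.1, 0.3]
print(score(sample) >= THRESHOLD)   # True: correctly flagged as malicious
evaded = adversarial(sample)
print(score(evaded) >= THRESHOLD)   # False: a small perturbation evades detection
```

Repeat this at scale and the model misclassifies reliably, which is exactly the erosion of trust the Medium report describes.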

How to Thwart AI-Powered Insider Threats

With the adoption of intelligent tools on the rise, how can companies safeguard against more powerful insider threats?

Best practices include:

  • Creating human partnerships: These new tools work best in specific-task situations. By pairing any new learning tools with a human counterpart, companies create an additional line of defense against potential compromise.
  • Developing checks and balances: Does reported data match observations? Has it been independently verified? As more critical decision-making is handed off to AI and automation, enterprises must develop check-and-balance systems that compare outputs to reliable baseline data.
  • Deploying tools with a purpose: In many ways, the rise of intelligent technologies mirrors that of the cloud. At first an outlier, the cloud quickly became a must-have for digital transformation. There is potential for a similar more-is-better tendency here, but this overlooks the key role of AI and machine learning as a way to address specific pain points rather than simply keep up with the Joneses. Start small by finding a data-driven problem that could benefit from the implementation of intelligence technologies. Think of it like the zero-trust model for data access: It’s easier to contain potential compromise when the attack surface is inherently limited.
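The check-and-balance practice above can be sketched as a simple gate, shown here as a hypothetical helper (the function name, tolerance, and figures are invented for illustration): instead of acting on a model's output directly, compare it against an independently observed baseline and escalate disagreements to a human reviewer.

```python
# Hypothetical check-and-balance gate: accept a model's reported value
# only when it agrees with independently verified baseline data; flag
# larger drift for human review instead of acting on it automatically.

def verify_output(model_value, baseline_value, tolerance=0.10):
    """Return True when the model output is within tolerance of the baseline."""
    if baseline_value == 0:
        return model_value == 0
    drift = abs(model_value - baseline_value) / abs(baseline_value)
    return drift <= tolerance

# Predicted failure rate vs. what technicians actually observed.
print(verify_output(0.042, 0.040))  # True: within 10 percent, accept
print(verify_output(0.090, 0.040))  # False: escalate to a human reviewer
```

The design choice matters more than the arithmetic: a poisoned model can only mislead decisions that are allowed to bypass the baseline comparison.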

Machine learning and AI tools are gaining corporate support, and fortunately, they’re not likely to supplant the IT workforce. Looking forward, human aid will in fact be essential to proactively address the potential for next-generation insider threats empowered by compromised learning tools and adversarial AI.
