Artificial intelligence (AI) and machine learning (ML) have entered the enterprise environment.
According to the IBM AI in Action 2024 Report, two broad groups are onboarding AI: leaders and learners. Leaders are seeing quantifiable results, with two-thirds reporting revenue growth gains of 25% or more. Learners, meanwhile, say they’re following an AI roadmap (72%), but just 40% say their C-suite fully understands the value of AI investment.
One thing they have in common? Challenges with data security. Despite their success with AI and ML, security remains the top concern. Here’s why.
Historically, computers did what they were told. Thinking outside the box wasn’t an option — lines of code dictated what was possible and permissible.
AI and ML models take a different approach. Instead of rigid rule sets, they are given general guidelines. Companies supply vast amounts of training data that help these models “learn,” in turn improving their output.
A simple example is an AI tool designed to identify images of dogs. The underlying ML model starts with basic guidance — dogs have four legs, two ears, a tail and fur. Thousands of images, some of dogs and some not, are then fed to the model. The more pictures it “sees,” the better it becomes at distinguishing dogs from everything else.
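To make that concrete, here is a minimal sketch of the dog-vs.-not-dog idea in Python. Synthetic feature vectors stand in for real photos, and the model choice is an illustrative assumption (a production system would likely use a neural network on raw pixels):

```python
# Toy "dog vs. not-dog" classifier: synthetic 64-dimensional feature
# vectors stand in for images; label 1 = dog, 0 = not-dog.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

X = rng.normal(size=(1000, 64))                 # 1,000 "images"
y = (X[:, :8].sum(axis=1) > 0).astype(int)      # a learnable pattern

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(f"test accuracy: {model.score(X_test, y_test):.2f}")
```

The more (correctly labeled) examples the model sees, the better its decision boundary gets — which is exactly the property attackers target.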
If attackers can gain access to AI models, they can modify model outputs. Consider the example above. Malicious actors compromise business networks and flood the training pipeline with unlabeled images of cats and images incorrectly labeled as dogs. Over time, model accuracy suffers and outputs are no longer reliable.
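A toy extension of the sketch above shows how this kind of data poisoning plays out: flipping a fraction of the training labels (the “cats labeled as dogs”) erodes test accuracy. The poisoning rates and data are illustrative:

```python
# Label-flipping poisoning demo, reusing the toy setup above: an attacker
# mislabels a slice of the training set and test accuracy degrades.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 64))
y = (X[:, :8].sum(axis=1) > 0).astype(int)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

for poison_rate in (0.0, 0.2, 0.4):
    y_poisoned = y_train.copy()
    n_flip = int(poison_rate * len(y_poisoned))
    flip_idx = rng.choice(len(y_poisoned), size=n_flip, replace=False)
    y_poisoned[flip_idx] = 1 - y_poisoned[flip_idx]  # mislabel these samples

    model = LogisticRegression(max_iter=1000).fit(X_train, y_poisoned)
    print(f"{poison_rate:.0%} poisoned -> accuracy {model.score(X_test, y_test):.2f}")
```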
Forbes highlights a recent competition that saw hackers trying to “jailbreak” popular AI models and trick them into producing inaccurate or harmful content. The rise of generative tools makes defending against this kind of attack a priority — in 2023, researchers discovered that by simply adding strings of random symbols to the end of queries, they could convince generative AI (gen AI) tools to provide answers that bypassed model safety filters.
And this concern isn’t just conceptual. As noted by The Hacker News, an attack technique known as “Sleepy Pickle” poses significant risks for ML models. By inserting a malicious payload into pickle files — used to serialize Python object structures — attackers can change how models weigh and compare data and alter model outputs. This could allow them to generate misinformation that causes harm to users, steal user data or generate content that contains malicious links.
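The underlying risk is easy to demonstrate. Pickle reconstructs objects by executing instructions stored in the file, so loading an untrusted pickle can run arbitrary code. The snippet below is a benign stand-in for that technique, not the actual Sleepy Pickle payload:

```python
# Why pickle is a risky model-exchange format: unpickling can execute
# arbitrary code chosen by whoever produced the file.
import os
import pickle

class Payload:
    # pickle calls __reduce__ to decide how to rebuild the object;
    # returning (os.system, (...)) makes loading the file run a command
    def __reduce__(self):
        return (os.system, ("echo code executed during unpickling",))

blob = pickle.dumps(Payload())
pickle.loads(blob)  # the command runs here, before any "model" is returned
```

This is one reason weights-only formats such as safetensors, or hash verification of model files before loading, are commonly recommended when sharing models.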
To reduce the risk of compromised AI and ML, three components are critical: securing the data, securing the models and monitoring their use.
Accurate, timely and reliable data underpins usable model outputs. The process of centralizing and correlating this data, however, creates a tempting target for attackers. If they can infiltrate large-scale AI data storage, they can manipulate model outputs.
As a result, enterprises need solutions that automatically and continuously monitor AI infrastructure for signs of compromise.
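In its simplest form, that monitoring can look like hashing model artifacts on a schedule and alerting when they diverge from a known-good baseline. The paths and baseline store below are hypothetical:

```python
# Minimal integrity-monitoring sketch: hash each model artifact and
# compare against a known-good baseline recorded at deployment time.
import hashlib
import json
from pathlib import Path

ARTIFACT_DIR = Path("models/production")   # hypothetical artifact location
BASELINE_FILE = Path("model_hashes.json")  # hypothetical baseline store

def sha256(path: Path) -> str:
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def check_artifacts() -> list[str]:
    baseline = json.loads(BASELINE_FILE.read_text())
    return [
        f"TAMPER ALERT: {name} hash changed"
        for name, expected in baseline.items()
        if sha256(ARTIFACT_DIR / name) != expected
    ]

if __name__ == "__main__":
    for alert in check_artifacts():
        print(alert)
```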
Changes to AI and ML models can lead to outputs that look legitimate but have been modified by attackers. At best, these outputs inconvenience customers and slow down business processes. At worst, they could negatively impact both reputation and revenue.
To reduce the risk of model manipulation, organizations need tools capable of identifying security vulnerabilities and detecting misconfigurations.
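One common detection pattern — sketched here assuming a scikit-learn-style model and numpy arrays — is a “canary” check: re-score the deployed model on a fixed, trusted evaluation set and alert when accuracy drifts from its deployment baseline:

```python
def canary_check(model, X_canary, y_canary, baseline_acc=0.95, tolerance=0.05):
    # Re-score the deployed model on a fixed, trusted evaluation set;
    # a sudden accuracy drop suggests the model or its data was altered.
    acc = (model.predict(X_canary) == y_canary).mean()
    if acc < baseline_acc - tolerance:
        print(f"ALERT: canary accuracy {acc:.2f} below baseline {baseline_acc:.2f}")
        return False
    return True
```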
Who’s using models? With what data? And for what purpose? Even if data and models are secured, use by malicious actors may put companies at risk. Continuous compliance monitoring is critical to ensure legitimate use.
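In practice, that monitoring starts with an audit trail. Here is a hedged sketch: wrap model inference so every call records who asked, when and for what stated purpose. The field names and logger setup are assumptions for illustration:

```python
# Usage-monitoring sketch: an audit decorator that logs who called the
# model, when, for what purpose, and how much data was submitted.
import functools
import json
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("model_audit")

def audited(predict_fn):
    @functools.wraps(predict_fn)
    def wrapper(inputs, *, user: str, purpose: str):
        audit_log.info(json.dumps({
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "user": user,            # who is using the model
            "purpose": purpose,      # for what stated purpose
            "n_inputs": len(inputs)  # how much data was submitted
        }))
        return predict_fn(inputs)
    return wrapper

@audited
def predict(inputs):
    return [0 for _ in inputs]  # placeholder for a real model call

predict([1, 2, 3], user="analyst-42", purpose="fraud-review")
```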
AI and ML tools can help enterprises discover data insights and drive increased revenue. If compromised, however, models can be used to deliver inaccurate outputs or deploy malicious code.
With IBM Guardium AI Security, businesses are better equipped to manage the security risks of sensitive models. See how.