5 min read
The rapid rise of generative artificial intelligence (gen AI) technologies has ushered in a transformative era for industries worldwide. Over the past 18 months, enterprises have increasingly integrated gen AI into their operations, leveraging its potential to innovate and streamline processes. From automating customer service to enhancing product development, the applications of gen AI are vast and impactful. According to a recent IBM report, approximately 42% of large enterprises have adopted AI, with the technology capable of automating up to 30% of knowledge work activities in various sectors, including sales, marketing, finance and customer service.
However, the accelerated adoption of gen AI also brings significant risks, such as inaccuracy, intellectual property concerns and cybersecurity threats. Of course, this is only one instance in a series of enterprises adopting new technology, such as cloud computing, only to realize afterward that incorporating security principles should have been a priority from the start. Now, we can learn from those past missteps and adopt Secure by Design principles early while developing gen AI-based enterprise applications.
The recent wave of cloud adoption provides valuable insights into prioritizing security early in any technology transition. Many organizations embraced cloud technologies for benefits like cost reduction, scalability and disaster recovery. However, the haste to reap these benefits often led to oversights in security, resulting in high-profile breaches due to misconfigurations. The following chart shows the impact of these misconfigurations. It illustrates the cost and frequency of data breaches by initial attack vector, where cloud misconfigurations are shown to have a significant average cost of USD 3.98 million:
One notable incident occurred in 2023: A misconfigured cloud storage bucket exposed sensitive data from multiple companies, including personal information like email addresses and social security numbers. This breach highlighted the risks associated with improper cloud storage configurations and the financial impact due to reputational damage.
Similarly, a vulnerability in an enterprise workspace Software-as-a-Service (SaaS) application resulted in a major data breach in 2023, where unauthorized access was gained through an unsecured account. This brought to light the impact of inadequate account management and monitoring. These incidents, among many others (captured in the recently published IBM Cost of a Data Breach Report 2024), underline the critical need for a Secure by Design approach, ensuring that security measures are integral to these AI adoption programs from the very beginning.
As enterprises rapidly integrate gen AI into their operations, the importance of addressing security from the beginning cannot be overstated. AI technologies, while transformative, introduce new security vulnerabilities. Recent breaches related to AI platforms demonstrate these risks and their potential impact on businesses.
Here are some examples of AI-related security breaches in the last couple of months:
1. Deepfake scams: In one case, a UK energy firm’s CEO was duped into transferring USD 243,000, believing he was speaking with his boss. The scam utilized deepfake technology, highlighting the potential for AI-driven fraud.
2. Data poisoning attacks: Attackers can corrupt AI models by introducing malicious data during training, leading to erroneous outputs. This was seen when a cybersecurity firm’s machine learning model was compromised, causing delays in threat response.
3. AI model exploits: Vulnerabilities in AI applications, such as chatbots, have led to many incidents of unauthorized access to sensitive data. These breaches underscore the need for robust security measures around AI interfaces.
Industry newsletter
Stay up to date on the most important—and intriguing—industry trends on AI, automation, data and beyond with the Think newsletter. See the IBM Privacy Statement.
Your subscription will be delivered in English. You will find an unsubscribe link in every newsletter. You can manage your subscriptions or unsubscribe here. Refer to our IBM Privacy Statement for more information.
The consequences of AI security breaches are multifaceted:
As enterprises rapidly adopt their customer-facing applications to adopt gen AI technologies, it is important to have a structured approach to securing them to reduce the risk of having their businesses interrupted by cyber adversaries.
To effectively secure gen AI applications, enterprises should adopt a comprehensive security strategy that spans the entire AI lifecycle. There are three key stages:
1. Data collection and handling: Ensure the secure collection and handling of data, including encryption and strict access controls.
2. Model development and training: Implement secure practices during development, training and fine-tuning of AI models to protect against data poisoning and other attacks.
3. Model inference and live use: Monitor AI systems in real-time and ensure continuous security assessments to detect and mitigate potential threats.
These three stages should be considered alongside the Shared Responsibility model of a typical cloud-based AI platform (shown below).
In the IBM Framework for Securing Generative AI, you can find a detailed description of these three stages and security principles to follow. They are combined with cloud security controls at the underlying infrastructure layer, which runs large language models and applications.
The transition to gen AI enables enterprises to fuel innovation in their business applications, automate complex tasks and improve efficiency, accuracy and decision-making while reducing costs and increasing the speed and agility of their business processes.
As seen with the cloud adoption wave, prioritizing security from the beginning is crucial. By incorporating security measures into the AI adoption process early on, enterprises can convert past missteps into critical milestones and protect themselves from sophisticated cyber threats. This proactive approach ensures compliance with rapidly evolving AI regulatory requirements, protects enterprises and their client’s sensitive data and maintains the trust of stakeholders. This way, businesses can achieve their AI strategic goals securely and sustainably.
IBM offers comprehensive solutions to support enterprises in securely adopting AI technologies. Through consulting, security services and a robust AI security framework, IBM is helping organizations build and deploy AI applications at scale, ensuring transparency, ethics and compliance. IBM’s AI Security Discovery workshops are a critical first step, helping clients identify and mitigate security risks early in their AI adoption journey.
For more information, please check out these resources:
IBM web domains
ibm.com, ibm.org, ibm-zcouncil.com, insights-on-business.com, jazz.net, mobilebusinessinsights.com, promontory.com, proveit.com, ptech.org, s81c.com, securityintelligence.com, skillsbuild.org, softlayer.com, storagecommunity.org, think-exchange.com, thoughtsoncloud.com, alphaevents.webcasts.com, ibm-cloud.github.io, ibmbigdatahub.com, bluemix.net, mybluemix.net, ibm.net, ibmcloud.com, galasa.dev, blueworkslive.com, swiss-quantum.ch, blueworkslive.com, cloudant.com, ibm.ie, ibm.fr, ibm.com.br, ibm.co, ibm.ca, community.watsonanalytics.com, datapower.com, skills.yourlearning.ibm.com, bluewolf.com, carbondesignsystem.com, openliberty.io