September 25, 2024 By Jennifer Gregory 3 min read

If you’re wondering whether new generative artificial intelligence (gen AI) tools are putting your business at risk, the answer is: probably. And the risk only grows as AI tool use in the workplace increases.

A recent Deloitte study found that more than 60% of knowledge workers use AI tools at work. While the tools bring many benefits, especially improved productivity, experts agree they also add risk. According to NSA Cybersecurity Director Dave Luber, AI brings unprecedented opportunities while also opening new avenues for malicious activity. Many common tools lack important defenses and protections.

The risk is already on the radar for many organizations. According to the IBM Institute for Business Value, 96% of executives say adopting generative AI makes a security breach likely in their organization within the next three years. Additionally, 47% of executives are concerned that adopting generative AI in operations will lead to new kinds of attacks targeting their own AI models, data, or services.

What are the cybersecurity risks associated with using gen AI tools?

Earlier this year, the NSA Artificial Intelligence Security Center (AISC) issued a Best Practices for Deploying Secure and Resilient AI Systems Cybersecurity Information Sheet (CSI) to help organizations understand the risks and adopt best practices to reduce vulnerabilities. For the CSI, the NSA partnered with the FBI, CISA, the Australian Signals Directorate’s Australian Cyber Security Centre (ACSC), the Canadian Centre for Cyber Security, the New Zealand National Cyber Security Centre (NCSC-NZ), and the United Kingdom National Cyber Security Centre (NCSC-UK).

“Malicious actors targeting AI systems may use attack vectors unique to AI systems, as well as standard techniques used against traditional IT. Due to the large variety of attack vectors, defenses need to be diverse and comprehensive. Advanced malicious actors often combine multiple vectors to execute operations that are more complex. Such combinations can more effectively penetrate layered defenses,” stated the CSI.


Here are common ways that gen AI tools increase cybersecurity risk:

  • More convincing social engineering threats: Because generative AI tools record data entered into the system, threat actors can use that data to design realistic social engineering attacks. By crafting prompts that extract data stored for training purposes, cyber criminals can quickly produce a phishing email that is far more likely to succeed. To reduce this risk, companies should disable the tools’ use of entered data for training, or consider using proprietary tools.
  • Expanding the attack surface for insider attacks: While proprietary systems reduce some risk, they also make it easier for insiders to leak data because they concentrate more sensitive data in one place. Additionally, insiders who know how the logging and monitoring on proprietary systems works (and such controls are typically less robust than those in commercial products) may be able to circumvent audit trails.
  • Leaking data through chatbots: Many companies use generative AI to build both internal and external chatbots. However, these tools can be hacked and then used to leak sensitive data, including proprietary secrets or company financial data.

How can organizations reduce their risk?

Because generative AI is a powerful tool that can provide significant benefits throughout the organization, businesses should focus on reducing risk rather than eliminating use.

Here are some best practices from the NSA CSI:

  • Validate the AI system before and during use. Consider using one of the many methods available, such as cryptographic hashes, digital signatures or checksums. You can then confirm each artifact’s origin and integrity and detect unauthorized modification.
  • Ensure a robust deployment environment architecture. Establish security protections for the boundaries between the IT environment and the AI system. You should also identify and protect all proprietary data sources the organization will use in AI model training or fine-tuning. Other areas of focus should be addressing blind spots in boundary protections and other security-relevant areas in the AI system the threat model identifies.
  • Secure exposed APIs. Protect exposed application programming interfaces (APIs) by implementing authentication and authorization mechanisms for API access.

As generative AI continues to develop in both functionality and use cases, organizations must carefully watch cybersecurity trends and best practices. By taking proactive precautions, organizations can capture productivity gains while keeping risk in check.
