If you’re wondering whether new generative artificial intelligence (gen AI) tools are putting your business at risk, the answer is: probably, and increasingly so as AI tools become more common in the workplace.
A recent Deloitte study found that more than 60% of knowledge workers use AI tools at work. While the tools bring many benefits, especially improved productivity, experts agree they also add risk. According to NSA Cybersecurity Director Dave Luber, AI brings unprecedented opportunities while also presenting opportunities for malicious activity. Many common tools lack important defenses and protections.
The risk is already on the radar for many organizations. According to the IBM Institute for Business Value, 96% of executives say adopting generative AI makes a security breach likely in their organization within the next three years. Additionally, 47% of executives are concerned that adopting generative AI in operations will lead to new kinds of attacks targeting their own AI models, data, or services.
What are the cybersecurity risks associated with using gen AI tools?
Earlier this year, the NSA Artificial Intelligence Security Center (AISC) issued a Cybersecurity Information Sheet (CSI), Best Practices for Deploying Secure and Resilient AI Systems, to help organizations understand the risks and adopt best practices to reduce vulnerabilities. The NSA produced the CSI in partnership with the FBI, CISA, the Australian Signals Directorate’s Australian Cyber Security Centre (ACSC), the Canadian Centre for Cyber Security, the New Zealand National Cyber Security Centre (NCSC-NZ), and the United Kingdom National Cyber Security Centre (NCSC-UK).
“Malicious actors targeting AI systems may use attack vectors unique to AI systems, as well as standard techniques used against traditional IT. Due to the large variety of attack vectors, defenses need to be diverse and comprehensive. Advanced malicious actors often combine multiple vectors to execute operations that are more complex. Such combinations can more effectively penetrate layered defenses,” stated the CSI.
Here are common ways that gen AI tools increase cybersecurity risk:
- More convincing social engineering attacks: Because many generative AI tools retain data entered into the system, threat actors can use that data to design realistic social engineering attacks. By crafting prompts that pull data stored for training purposes, cyber criminals can quickly produce a phishing email that is far more likely to succeed. To reduce this risk, companies should disable the tools’ use of entered data for training or consider proprietary tools instead.
- Expanding the attack surface for insider threats: While proprietary systems reduce some risks, they also concentrate a larger surface area of sensitive data in one place, making it easier for insiders to leak it. Additionally, insiders who understand how the in-house logging and monitoring systems work may be able to evade audit trails, and these controls are typically less robust than those in commercial products.
- Leaking data through chatbots: Many companies use generative AI to build both internal and external-facing chatbots. However, these tools can be compromised and used to leak sensitive data, including trade secrets or company financial data (one possible safeguard, output redaction, is sketched after this list).
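To illustrate one way to blunt the chatbot leakage risk above, here is a minimal Python sketch that scrubs replies before they leave the system. The patterns and the “Project Atlas” code name are illustrative assumptions, not guidance from the CSI; a production deployment would pair this with a data loss prevention (DLP) service or a trained classifier rather than a handful of regexes.

```python
import re

# Illustrative patterns only; real deployments need far broader coverage.
SENSITIVE_PATTERNS = [
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),    # US Social Security number format
    re.compile(r"\b\d{13,16}\b"),            # strings that look like payment card numbers
    re.compile(r"(?i)\bproject\s+atlas\b"),  # hypothetical internal code name
]

def redact(reply: str) -> str:
    """Scrub matches from a chatbot reply before it is returned to the user."""
    for pattern in SENSITIVE_PATTERNS:
        reply = pattern.sub("[REDACTED]", reply)
    return reply

print(redact("Card 4111111111111111 covers Project Atlas."))
# Card [REDACTED] covers [REDACTED].
```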
How can organizations reduce their risk?
Because generative AI is a powerful tool that can provide significant benefits throughout the organization, organizations should focus on reducing risk rather than eliminating use.
Here are some best practices from the NSA CSI:
- Validate the AI system before and during use. Consider one of the many methods available, such as cryptographic hashes, digital signatures or checksums, so you can confirm each artifact’s origin and integrity and detect unauthorized modification (a checksum example follows this list).
- Ensure a robust deployment environment architecture. Establish security protections for the boundaries between the IT environment and the AI system. Identify and protect all proprietary data sources the organization will use in AI model training or fine-tuning. Also address blind spots in boundary protections and any other security-relevant areas of the AI system that the threat model identifies.
- Secure exposed APIs. Secure exposed application programming interfaces (APIs) by implementing authentication and authorization mechanisms for API access (see the authentication sketch after this list).
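The validation practice lends itself to a short example. The Python sketch below checks model artifacts against a manifest of known-good SHA-256 checksums; the manifest format and file names are assumptions for illustration, and a production pipeline would typically rely on digitally signed manifests rather than bare hashes.

```python
import hashlib
import json
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Hash a file in chunks so large model weights never need to fit in memory."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_artifacts(manifest_path: Path) -> bool:
    """Compare each artifact's current hash against the hash recorded at approval time."""
    manifest = json.loads(manifest_path.read_text())  # e.g. {"model.safetensors": "ab12..."}
    all_ok = True
    for name, expected in manifest.items():
        actual = sha256_of(manifest_path.parent / name)
        if actual != expected:
            print(f"Integrity check FAILED for {name}")
            all_ok = False
    return all_ok
```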
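To make the API hardening concrete, here is a minimal sketch of key-based authentication in front of a model endpoint, using Flask. The header name, endpoint path, and environment variable are illustrative assumptions; real deployments would typically layer OAuth/OIDC or mutual TLS, plus per-route authorization, on top of a shared key.

```python
import hmac
import os

from flask import Flask, abort, request  # assumes Flask is installed

app = Flask(__name__)
API_KEY = os.environ["MODEL_API_KEY"]  # keep the secret out of source control

@app.before_request
def require_api_key():
    """Runs before every route: reject requests that lack the expected key."""
    supplied = request.headers.get("X-API-Key", "")
    if not hmac.compare_digest(supplied, API_KEY):  # constant-time comparison
        abort(401)

@app.post("/v1/generate")
def generate():
    # The model is invoked only after the request has been authenticated.
    return {"status": "ok"}
```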
As generative AI continues to develop in both functionality and use cases, organizations must carefully watch cybersecurity trends and best practices. By proactively taking precautions, organizations can capture productivity gains while keeping risk in check.