September 25, 2024 By Jennifer Gregory 3 min read

If you’re wondering whether new generative artificial intelligence (gen AI) tools are putting your business at risk, the answer is: probably. And the risk only grows with the increasing use of AI tools in the workplace.

A recent Deloitte study found that more than 60% of knowledge workers use AI tools at work. While the tools bring many benefits, especially improved productivity, experts agree they also add risk. According to NSA Cybersecurity Director Dave Luber, AI brings unprecedented opportunities while also presenting opportunities for malicious activity. Many of the common tools lack important defenses and protections.

The risk is already on the radar for many organizations. According to the IBM Institute for Business Value, 96% of executives say adopting generative AI makes a security breach likely in their organization within the next three years. Additionally, 47% of executives are concerned that adopting generative AI in operations will lead to new kinds of attacks targeting their own AI models, data, or services.

What are the cybersecurity risks associated with using gen AI tools?

Earlier this year, the NSA Artificial Intelligence Security Center (AISC) issued a Best Practices for Deploying Secure and Resilient AI Systems Cybersecurity Information Sheet (CSI) to help organizations understand the risks and adopt best practices to reduce vulnerabilities. For the CSI, the NSA partnered with the FBI, CISA, the Australian Signals Directorate’s Australian Cyber Security Centre (ACSC), the Canadian Centre for Cyber Security, the New Zealand National Cyber Security Centre (NCSC-NZ), and the United Kingdom National Cyber Security Centre (NCSC-UK).

“Malicious actors targeting AI systems may use attack vectors unique to AI systems, as well as standard techniques used against traditional IT. Due to the large variety of attack vectors, defenses need to be diverse and comprehensive. Advanced malicious actors often combine multiple vectors to execute operations that are more complex. Such combinations can more effectively penetrate layered defenses,” stated the CSI.


Here are common ways that gen AI tools increase cybersecurity risk:

  • More accurate social engineering threats: Because generative AI tools retain the data entered into them, threat actors can use that data to design realistic social engineering attacks. By entering prompts that pull data stored for training purposes, cyber criminals can quickly craft a phishing email that is more likely to succeed. To reduce this risk, companies should disable the tools’ use of entered data for training or consider proprietary tools. (A prompt-scrubbing sketch follows this list.)
  • Expanding the attack surface for insider threats: While proprietary systems reduce some risk, they also concentrate more data in one place, making it easier for insiders to leak it. Additionally, insiders who know how the logging and monitoring systems on proprietary platforms work, where such controls are typically less robust than in commercial products, may be able to evade audit trails.
  • Leaking data through chatbots: Many companies use generative AI to build both internal and customer-facing chatbots. However, these tools can be hacked and then used to leak sensitive data, including proprietary secrets or company financial data. (See the output-filter sketch after this list.)
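One way to act on the first point is to keep sensitive values from ever reaching a third-party tool at all. Below is a minimal Python sketch of client-side prompt scrubbing; the regex patterns and the send_to_genai_api() helper are hypothetical illustrations, not part of any vendor's API, and real deployments would pair this with the vendor's own data-retention opt-outs.

```python
# A minimal sketch: redact sensitive values from prompts before they leave
# the company boundary, so a third-party gen AI tool never stores them.
import re

# Hypothetical patterns for illustration; real deployments need broader coverage.
SENSITIVE_PATTERNS = [
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[REDACTED-SSN]"),      # US SSNs
    (re.compile(r"\b\d{13,16}\b"), "[REDACTED-CARD]"),             # card numbers
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "[REDACTED-EMAIL]"),  # email addresses
]

def scrub(prompt: str) -> str:
    """Replace values matching known sensitive patterns with placeholders."""
    for pattern, placeholder in SENSITIVE_PATTERNS:
        prompt = pattern.sub(placeholder, prompt)
    return prompt

# Usage (send_to_genai_api is a hypothetical client function):
# send_to_genai_api(scrub(user_prompt))
```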
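For the chatbot leakage point, a simple server-side output filter can stop a compromised or manipulated bot from echoing known-sensitive material back to users. This is a minimal sketch under assumed conditions; the deny-list terms are hypothetical placeholders, and production systems would typically rely on dedicated data loss prevention tooling.

```python
# A minimal sketch: block a customer-facing chatbot from repeating
# deny-listed internal terms. Terms below are hypothetical placeholders.
DENY_LIST = {"project-atlas", "q3-revenue-forecast", "internal-only"}

def safe_reply(model_output: str) -> str:
    """Return the model's answer only if it contains no deny-listed terms."""
    lowered = model_output.lower()
    if any(term in lowered for term in DENY_LIST):
        # Refuse rather than leak; a real system would also log the event.
        return "I can't share that information."
    return model_output
```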

How can organizations reduce their risk?

Because generative AI is a powerful tool that can provide significant benefits throughout the organization, the goal should be to reduce risk rather than eliminate use.

Here are some best practices from the NSA CSI:

  • Validate the AI system before and during use. Consider using one of the many methods available, such as cryptographic methods, digital signatures or checksums, to confirm each artifact’s origin and integrity and to detect unauthorized modification. (A checksum sketch follows this list.)
  • Ensure a robust deployment environment architecture. Establish security protections for the boundaries between the IT environment and the AI system, and identify and protect all proprietary data sources the organization will use in AI model training or fine-tuning. Also address any blind spots in boundary protections and other security-relevant areas of the AI system that the threat model identifies.
  • Secure exposed APIs. Secure exposed application programming interfaces (APIs) by implementing authentication and authorization mechanisms for API access. (An authentication sketch follows the checksum example below.)
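As one concrete reading of the validation guidance, here is a minimal Python sketch of the checksum approach: a SHA-256 digest is recorded for each model artifact at release time and verified before loading. The file path and expected digest below are hypothetical placeholders.

```python
# A minimal sketch: verify a model artifact's SHA-256 digest before loading.
import hashlib
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Hash the file in chunks so large model weights don't fill memory."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

EXPECTED = "4f2a..."  # digest published with the artifact (placeholder)

artifact = Path("models/classifier-v3.bin")  # hypothetical path
if sha256_of(artifact) != EXPECTED:
    raise RuntimeError(f"Integrity check failed for {artifact}; refusing to load")
```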
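For the API guidance, the sketch below shows one common authentication pattern using the FastAPI framework: an API key is checked on every request before the model is invoked. The endpoint name and key store are assumptions for illustration; the CSI does not prescribe a specific mechanism.

```python
# A minimal sketch: API-key authentication in front of a gen AI endpoint.
import hmac
from fastapi import Depends, FastAPI, Header, HTTPException

app = FastAPI()

# Placeholder; in production, keys come from a secrets manager, not a literal.
VALID_API_KEYS = {"example-key-123"}

def require_api_key(x_api_key: str = Header(...)) -> str:
    # Constant-time comparison to avoid timing side channels.
    if not any(hmac.compare_digest(x_api_key, k) for k in VALID_API_KEYS):
        raise HTTPException(status_code=401, detail="Invalid or missing API key")
    return x_api_key

@app.post("/generate")  # hypothetical inference endpoint
def generate(payload: dict, api_key: str = Depends(require_api_key)):
    # Call the model only after the caller is authenticated.
    return {"status": "authorized"}
```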

As generative AI continues to develop in both functionality and use cases, organizations must keep close watch on cybersecurity trends and best practices. By taking precautions proactively, organizations can capture productivity gains while keeping risk in check.

