September 25, 2024 By Jennifer Gregory 3 min read

If you’re wondering whether new generative artificial intelligence (gen AI) tools are putting your business at risk, the answer is: probably. And the risk only grows as AI tools become more common in the workplace.

A recent Deloitte study found that more than 60% of knowledge workers use AI tools at work. While the tools bring many benefits, especially improved productivity, experts agree they also add risk. According to NSA Cybersecurity Director Dave Luber, AI brings unprecedented opportunities while also presenting opportunities for malicious activity. Many of the common tools lack important defenses and protections.

The risk is already on the radar for many organizations. According to the IBM Institute for Business Value, 96% of executives say adopting generative AI makes a security breach likely in their organization within the next three years. Additionally, 47% of executives are concerned that adopting generative AI in operations will lead to new kinds of attacks targeting their own AI models, data, or services.

What are the cybersecurity risks associated with using gen AI tools?

Earlier this year, the NSA Artificial Intelligence Security Center (AISC) issued a Best Practices for Deploying Secure and Resilient AI Systems Cybersecurity Information Sheet (CSI) to help organizations understand the risks and adopt best practices to reduce vulnerabilities. For the CSI, the NSA partnered with the FBI, CISA, the Australian Signals Directorate’s Australian Cyber Security Centre (ACSC), the Canadian Centre for Cyber Security, the New Zealand National Cyber Security Centre (NCSC-NZ), and the United Kingdom National Cyber Security Centre (NCSC-UK).

“Malicious actors targeting AI systems may use attack vectors unique to AI systems, as well as standard techniques used against traditional IT. Due to the large variety of attack vectors, defenses need to be diverse and comprehensive. Advanced malicious actors often combine multiple vectors to execute operations that are more complex. Such combinations can more effectively penetrate layered defenses,” stated the CSI.


Here are common ways that gen AI tools increase cybersecurity risk:

  • More accurate social engineering threats: Because generative AI tools record data entered into the system, threat actors can use the data to design realistic social engineering attacks. By entering prompts that pull data stored for training purposes, cyber criminals can quickly design a phishing email that is more likely to be effective. To reduce this risk, companies should disable the tools using data for training purposes or consider using proprietary tools.
  • Expanding the attack surface for insider threats: While proprietary systems reduce some risk, they also give insiders access to a larger pool of sensitive data, making leaks easier. Additionally, insiders who know how a proprietary system’s logging and monitoring work may be able to evade audit trails, and these controls are typically less robust than those in commercial products.
  • Leaking data through chatbots: Many companies use generative AI to create both internal and externally used chatbots. However, these tools can be hacked and then used to leak sensitive data, even proprietary secrets or company financial data.

How can organizations reduce their risk?

Because generative AI is a powerful tool that can provide significant benefits throughout the organization, the focus should be on reducing risk rather than eliminating use.

Here are some best practices from the NSA CSI:

  • Validate the AI system before and during use. Consider using one of the many methods available, such as cryptographic hashes, digital signatures or checksums, to confirm each artifact’s origin and integrity and detect unauthorized modification.
  • Ensure a robust deployment environment architecture. Establish security protections for the boundaries between the IT environment and the AI system, and identify and protect all proprietary data sources the organization will use in AI model training or fine-tuning. Also address blind spots in boundary protections and any other security-relevant areas of the AI system that the threat model identifies.
  • Secure exposed APIs. Protect exposed application programming interfaces (APIs) by implementing authentication and authorization mechanisms for API access.
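The first practice above, validating artifacts before and during use, can be sketched in a few lines. This is a minimal illustration, not a method prescribed by the CSI: it assumes the publisher of a model artifact also distributes a SHA-256 digest that can be compared against a locally computed one. The function names are illustrative.

```python
import hashlib
import hmac


def sha256_of_file(path: str, chunk_size: int = 8192) -> str:
    """Compute the SHA-256 digest of an artifact on disk, reading in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()


def verify_artifact(path: str, expected_digest: str) -> bool:
    """Check the artifact against the publisher's digest.

    hmac.compare_digest gives a constant-time comparison, which avoids
    leaking information through timing differences.
    """
    return hmac.compare_digest(sha256_of_file(path), expected_digest)
```

Checksums only prove integrity against the published digest; digital signatures additionally bind the artifact to a publisher’s key, so stronger deployments layer both.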
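The API-hardening practice can likewise be sketched with a simple key check. This is a hypothetical, framework-free example of the authentication step, assuming clients present an API key with each request; the key store and names are invented for illustration, and a production system would typically use a standard scheme such as OAuth 2.0 bearer tokens instead.

```python
import hashlib
import hmac

# Hypothetical server-side key store: only salted-free SHA-256 hashes of
# API keys are kept, never the raw keys themselves.
_API_KEY_HASHES: dict[str, str] = {}


def register_client(client_id: str, api_key: str) -> None:
    """Store a hash of the client's API key at provisioning time."""
    _API_KEY_HASHES[client_id] = hashlib.sha256(api_key.encode()).hexdigest()


def authorize_request(client_id: str, presented_key: str) -> bool:
    """Authenticate an incoming API call.

    Unknown clients are rejected, and the hash comparison is constant-time
    so attackers cannot probe keys character by character.
    """
    stored = _API_KEY_HASHES.get(client_id)
    if stored is None:
        return False
    presented = hashlib.sha256(presented_key.encode()).hexdigest()
    return hmac.compare_digest(stored, presented)
```

Authentication answers “who is calling?”; a real deployment would follow it with an authorization check that the caller is permitted to invoke that specific endpoint.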

As generative AI continues to develop in both functionality and use cases, organizations must carefully watch cybersecurity trends and best practices. By proactively taking precautions, organizations can capture productivity gains while keeping risk in check.
