Organizations face many challenges regarding cybersecurity, including keeping up with the ever-evolving threat landscape and complying with regulatory requirements. In addition, the cybersecurity skill shortage makes it more difficult for organizations to adequately staff their risk and compliance functions. According to the (ISC)2 2022 Cybersecurity Workforce Study, the global cybersecurity workforce gap has increased by 26.2%, with 3.4 million more workers needed to secure assets effectively.

Organizations must employ technologies like artificial intelligence (AI), collaboration tools and analytics to cope with the situation efficiently. To that end, ChatGPT can be an enabler for organizational governance, risk and compliance (GRC) functions.

ChatGPT addresses typical GRC use cases

A significant advancement in natural language processing (NLP) technologies has allowed for more accurate and nuanced language analysis, leading to better and more reliable insights. By leveraging the power of NLP, ChatGPT, the AI-based chatbot developed by OpenAI, can generate coherent and relevant responses to a wide range of questions and topics. Some reasons for ChatGPT’s popularity include its ability to understand human language and context, tailor responses based on earlier turns in a conversation and draw on the vast amount of information in its training data. ChatGPT is powered by GPT-3.5-turbo, one of OpenAI’s most advanced language models. OpenAI has also developed other models with different capabilities and use cases, such as Codex, DALL-E (image generation), Ada and Davinci.

With ChatGPT, GRC analysts have a valuable tool to use in navigating the world of risk and compliance. Let’s explore some of the GRC use cases ChatGPT can address.

Generate framework, policy and procedure documents

GRC analysts can use ChatGPT to generate draft policy or procedure documents by supplying basic information and guidelines. It can then use its NLP capabilities to create a coherent, well-structured document that meets the company’s GRC management requirements.
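As a rough sketch of how an analyst might automate this, the snippet below assembles a chat prompt asking the model to draft a policy document. The company name, framework, prompt wording and document fields are illustrative assumptions, not a prescribed template.

```python
# Sketch: drafting an information security policy via the OpenAI chat API.
# Company name, framework and prompt wording are illustrative assumptions.

def build_policy_messages(company, framework, scope):
    """Assemble a chat prompt asking the model to draft a policy document."""
    system = (
        "You are a GRC analyst. Draft clear, well-structured policy "
        "documents with numbered sections."
    )
    user = (
        f"Draft an information security policy for {company}, "
        f"aligned with {framework}, covering: {', '.join(scope)}. "
        "Include purpose, scope, roles and responsibilities, and review cadence."
    )
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": user},
    ]

messages = build_policy_messages(
    company="ExampleCorp",  # hypothetical organization
    framework="ISO/IEC 27001",
    scope=["access control", "acceptable use", "incident reporting"],
)

# The actual generation step requires an API key and network access:
# import openai
# draft = openai.ChatCompletion.create(model="gpt-3.5-turbo", messages=messages)
# print(draft.choices[0].message.content)
```

The output would still be a first draft only; as noted below, an analyst must review it against the company's actual requirements.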

Additionally, GRC analysts can use it to evaluate existing policy and procedure documents by inputting a completed document into the model. ChatGPT can support the analyst by analyzing the document, providing feedback on its effectiveness and highlighting areas that might require revision.

Using ChatGPT for policy and procedure document creation and evaluation saves GRC analysts time while improving the overall quality of the documents.

But while ChatGPT can create the initial draft, analysts must use human judgment and expertise to evaluate and refine it to ensure that the information is appropriate for the company’s specific needs and complies with all relevant regulations and standards.

Manage regulatory compliance

ChatGPT can be a valuable tool to help GRC analysts manage compliance and minimize the risk of fines and penalties. Recently, Meta was hit with $414 million in fines by the European Union’s leading privacy watchdog. While traditional methods of managing compliance may still be in use, they may not be able to keep up with the rapidly changing regulatory landscape. Hence, ChatGPT is valuable for the following use cases:

  • It can help by reviewing and analyzing vast amounts of data from agencies worldwide. It can filter the data, find the specific requirements applicable to the organization/sector/geography and assess gaps in the existing processes to provide recommendations. This systematic approach can give the organization confidence in its approach to monitoring and maintaining regulatory requirements. For example, an IT organization complying with environmental, social and governance (ESG) regulations can leverage the technology for detailed insight related to the EU’s Energy Efficiency Directive for data centers or addressing modern slavery and human trafficking disclosure requirements under the UK’s Modern Slavery Act.
  • It can analyze compliance-related communications, such as articles, regulatory filings, emails and chat messages, for potential compliance risks. The model could be trained to detect keywords that indicate potential compliance issues and flag them for further review. However, ChatGPT is a chatbot, not a regulatory compliance tool. While it can analyze and understand text data related to compliance and regulation, it cannot actively monitor a company’s compliance or regulatory status.
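The keyword-flagging idea in the second bullet can be sketched with a few lines of Python. The keyword list and sample messages are invented for illustration; a production system would use a trained classifier rather than a fixed list, with every flag routed to human review.

```python
# Minimal sketch of keyword-based flagging for a compliance review queue.
# Keywords and messages are illustrative assumptions, not a real ruleset.

COMPLIANCE_KEYWORDS = {"kickback", "off the books", "backdate", "shred"}

def flag_messages(messages):
    """Return (message, matched_keywords) pairs that need human review."""
    flagged = []
    for text in messages:
        lowered = text.lower()
        hits = sorted(k for k in COMPLIANCE_KEYWORDS if k in lowered)
        if hits:
            flagged.append((text, hits))
    return flagged

inbox = [
    "Please backdate the contract to March.",
    "Lunch at noon?",
    "Keep this payment off the books.",
]
for msg, hits in flag_messages(inbox):
    print(f"REVIEW: {msg!r} matched {hits}")
```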

Enhance risk assessment

ChatGPT’s vast knowledge of various industries and risk data can be leveraged to identify relevant risk factors. Risk managers can share information, such as incident reports, audit reports and regulatory filings, to surface potential risks. Analyzing this information helps the risk manager evaluate a risk’s impact quickly and accurately. For example, ChatGPT could be trained to analyze social media posts related to customer complaints to identify common patterns. It could then assess the likelihood and potential impact of those complaints on the company’s reputation.

The platform can also generate risk assessment reports that identify potential risks and provide recommendations for mitigation.
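A very simple version of the complaint-pattern analysis described above can be sketched locally before involving a language model: tally complaints by theme and rank themes by frequency. The themes, keywords and posts are assumptions for illustration, and the substring matching is deliberately naive.

```python
# Hedged sketch: grouping complaint text by theme keywords and ranking
# reputational exposure by frequency. Themes and keywords are assumptions.
from collections import Counter

THEMES = {
    "billing": ["overcharged", "refund", "invoice"],
    "privacy": ["data leak", "my data"],
    "outage": ["down", "offline", "can't log in"],
}

def theme_counts(posts):
    """Count posts per theme using naive substring matching."""
    counts = Counter()
    for post in posts:
        lowered = post.lower()
        for theme, terms in THEMES.items():
            if any(term in lowered for term in terms):
                counts[theme] += 1
    return counts

posts = [
    "I was overcharged twice this month",
    "Service has been down all morning",
    "Still waiting on my refund",
]
print(theme_counts(posts).most_common())  # themes ranked by complaint volume
```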

ChatGPT cannot analyze a network architecture diagram directly, since it processes text rather than images. Still, it can analyze the textual descriptions and labels within a network architecture diagram to help identify potential risks and vulnerabilities.

Improve fraud detection

Fraud is a significant risk for most organizations and is difficult to detect, especially when dealing with large volumes of data. ChatGPT could help by processing text data to look for indicators of potential fraud. A few examples include:

  • Emails: By analyzing emails, companies can identify patterns of communication that may indicate fraud, such as employees communicating with known fraudulent actors or discussing fraudulent activities.
  • Social media: Social media platforms can be a rich data source for detecting potential fraud. By analyzing social media posts and activity, companies can identify individuals or groups that may be engaging in fraudulent activity and patterns of behavior or language that may be associated with fraud.
  • Invoices: Analyzing invoice data helps companies identify suspicious patterns, such as duplicate invoices or invoices from unknown vendors.
  • Compliance documentation: Compliance documentation, such as audit reports and certifications, can help identify potential fraud or noncompliance.

However, text data analysis alone is insufficient for comprehensive fraud detection. It is most effective in conjunction with other fraud detection methods, such as data analytics and human expertise.
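Two of the invoice checks mentioned above, duplicate invoices and unknown vendors, are simple enough to sketch directly. The field names, approved-vendor list and sample records are illustrative assumptions, and real fraud analytics would combine such rules with statistical methods and human review, as noted above.

```python
# Sketch of two invoice checks: duplicates and non-approved vendors.
# Field names and the vendor list are illustrative assumptions.
from collections import Counter

APPROVED_VENDORS = {"Acme Supplies", "Northwind Ltd"}

def find_anomalies(invoices):
    """Return (invoice number, reasons) for suspicious invoices."""
    key_counts = Counter(
        (inv["vendor"], inv["number"], inv["amount"]) for inv in invoices
    )
    anomalies = []
    for inv in invoices:
        reasons = []
        if key_counts[(inv["vendor"], inv["number"], inv["amount"])] > 1:
            reasons.append("duplicate")
        if inv["vendor"] not in APPROVED_VENDORS:
            reasons.append("unknown vendor")
        if reasons:
            anomalies.append((inv["number"], reasons))
    return anomalies

invoices = [
    {"vendor": "Acme Supplies", "number": "INV-100", "amount": 500.0},
    {"vendor": "Acme Supplies", "number": "INV-100", "amount": 500.0},
    {"vendor": "Shell Co", "number": "INV-200", "amount": 9900.0},
]
print(find_anomalies(invoices))
```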

Support third-party assessment program

An analyst can leverage ChatGPT for a range of activities in a third-party assessment program:

  • It can be trained to provide guidance on assessment criteria and assist third-party assessors in understanding specific assessment requirements. It can provide information on industry best practices for security and compliance, including frameworks such as SOC 2, Payment Card Industry Data Security Standard (PCI DSS) and Health Insurance Portability and Accountability Act (HIPAA).
  • The model can analyze assessment data and identify patterns or trends, like analysis of third-party survey responses or audit findings, to identify common themes or areas where the organization may need to improve its compliance program.
  • It can conduct risk assessments based on specific criteria, such as risk factors related to a particular industry or region. This can help third-party assessors identify potential risks and provide guidance on implementing security controls to mitigate them.
  • It can provide ongoing education and training for third-party assessors.
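The second bullet, spotting common themes across assessment data, can be approximated with a basic aggregation before any model is involved. The finding records, control names and severity labels below are invented for illustration.

```python
# Sketch: aggregating third-party audit findings by control area to surface
# common weak spots. Records and control names are illustrative assumptions.
from collections import Counter

findings = [
    {"vendor": "Vendor A", "control": "access control", "severity": "high"},
    {"vendor": "Vendor B", "control": "access control", "severity": "medium"},
    {"vendor": "Vendor B", "control": "encryption", "severity": "low"},
]

by_control = Counter(f["control"] for f in findings)
high_risk = [f for f in findings if f["severity"] == "high"]

print(by_control.most_common(1))  # the most common weak control area
print(len(high_risk), "high-severity finding(s)")
```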

Conduct training and awareness

With ChatGPT, companies can provide analysts with a more engaging and personalized training experience tailored to their specific needs. The company could build a ChatGPT-based compliance training chatbot with which employees and analysts interact using natural language. The chatbot could be trained on the company’s specific compliance policies, procedures and regulatory requirements. This can help improve employee engagement and retention of key compliance concepts while reducing the risk of non-compliance and regulatory violations.
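One way to ground such a chatbot in company policy is to select the most relevant policy snippet for a question and include it in the prompt. The policy text, relevance scoring and model name below are assumptions for illustration, not a real corpus or a production retrieval method.

```python
# Illustrative sketch: grounding a training chatbot in company policy text.
# Policies and the word-overlap scoring are naive, assumed examples.

POLICIES = {
    "passwords": "Passwords must be at least 14 characters and rotated yearly.",
    "data handling": "Confidential data may not be stored on personal devices.",
}

def best_snippet(question):
    """Pick the policy whose topic and text share the most words with the question."""
    words = set(question.lower().split())
    return max(
        POLICIES.items(),
        key=lambda kv: len(words & set((kv[0] + " " + kv[1]).lower().split())),
    )

question = "How long must my password be?"
topic, snippet = best_snippet(question)
messages = [
    {"role": "system", "content": f"Answer using only this policy: {snippet}"},
    {"role": "user", "content": question},
]
# A call such as openai.ChatCompletion.create(model="gpt-3.5-turbo",
# messages=messages) would then generate the grounded answer.
```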

There are many other use cases where GRC professionals can benefit from using ChatGPT. Though the platform provides a wealth of information on security and compliance topics, it is not a substitute for working with qualified professionals, such as auditors, security consultants and legal advisors. Also, using ChatGPT and training other OpenAI models on company data come with data privacy risks. Security experts must properly evaluate and mitigate these risks to take full advantage of ChatGPT.
