ChatGPT reached 100 million users in January 2023, only two months after its release. That’s a record-breaking pace for an app. Numbers at that scale indicate that generative AI — AI that creates new content such as text, images, audio and video — has arrived. But with it comes new security and intellectual property (IP) issues for businesses to address.

ChatGPT is being used — and misused — by businesses and criminal enterprises alike. This has security implications for your business, employees and the intellectual property you create, own and protect.

How is ChatGPT being used?

With over 100 million users, the applications for ChatGPT are legion, but several real-world business uses stand out. IT companies are applying the app to software development, debugging, chatbots, data analysis and more. Service companies are streamlining sales, improving customer service and automating routine tasks. Government and public service sectors see benefits in drafting language for laws and bills and creating content in multiple languages. And countless individuals are using the app as a personal productivity tool.

Of course, as with all innovations, thieves discover uses as well. Generative AI tools are being used in phishing attempts, making them faster to execute, harder to detect and easier to fall for. ChatGPT imitates real human conversation. That means the typos, odd phrasing and poor grammar that often alert users to phishing foul play may soon disappear. Fortunately, while generative AI can be used by criminals to create problems, cybersecurity pros can use ChatGPT to counter them.


Pitfalls of ChatGPT and its intellectual property implications

OpenAI, the developer of ChatGPT, notes the hazards of its generative AI app. The company states that “…outputs may be inaccurate, untruthful and otherwise misleading at times” and that the tool will, in its words, “hallucinate,” or simply invent, outputs. Generative AI models improve as they learn from ever-larger language data sets, but inaccuracy remains common. Any output the app generates requires human fact-checking and quality control before use or distribution.

These inaccuracies can complicate your company’s IP rights. IP rights fall into four main categories: patents, trademarks, copyrights and trade secrets. If you claim IP rights to something even partially AI-generated, you need to ensure its accuracy first. To make matters muddier, one big question remains unresolved about AI-generated IP: ownership.

Who owns ChatGPT output? It’s complicated.

Per current ChatGPT terms of use, where permitted by law, you own the input (such as the prompts, questions or texts you enter when seeking output from the tool). Based on your input, the service delivers output. Collectively, the input and output are known as “content” per the terms of use. The terms state that OpenAI assigns to you all its rights, title and interest in and to the output.

However, OpenAI can’t assign rights to content it didn’t own in the first place. The terms also state that the user is responsible for generated content, including ensuring it does not violate applicable laws or OpenAI’s terms of use. The terms further note that one user’s output may be identical to another’s: to borrow their example query, “Why is the sky blue?”, two different users might ask that same question and receive the same output.

Many issues revolve around the intersection of AI and intellectual property. A few have been decided, while others remain unlitigated and unresolved. Thaler v. Vidal settled the patent question in the U.S.: the Court of Appeals for the Federal Circuit ruled that an AI system cannot be a named inventor, and in April 2023 the U.S. Supreme Court declined to hear the appeal, leaving in place the rule that patents can only be obtained by humans. However, Congress is now considering the issue and seeking guidance on how AI inventorship should be treated.

In March 2023, the U.S. Copyright Office issued guidance on registering copyright for works containing AI-generated material. During the copyright application, the applicant must disclose whether the work contains AI-generated content. The guidance also states that the applicant must explain the human author’s contributions to the work, and that sufficient human authorship must be established for that part of the work to qualify for copyright protection.

What about user input? That’s complicated too.

AI language models continuously improve by training on new data, and ChatGPT captures your chat history to help train its model. That means your input could become training data. If you enter confidential or proprietary information, you could put your company’s intellectual property at risk of theft or dissemination. Samsung discovered this the hard way when its engineers accidentally leaked internal source code in an upload to ChatGPT. In response, the company temporarily banned staff from using generative AI tools on company-owned devices.

Samsung isn’t alone. One data security service discovered and blocked requests to input confidential data into ChatGPT from 4.2% of 1.6 million workers at its client companies. The inputs included client data, source code and other proprietary and confidential information. One executive pasted corporate strategy into the app and requested the creation of a PowerPoint deck. In another incident, a doctor input a patient’s name and condition into the model to help write a letter to an insurance company. The fear is that this confidential data could resurface as output in response to the right query.
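One mitigation these incidents suggest is a pre-submission filter that scans prompts for patterns that look like secrets before they ever leave the network. The sketch below is purely illustrative; the pattern names and detection rules are assumptions, and real data loss prevention tooling uses far richer detection than a few regular expressions:

```python
import re

# Illustrative patterns only -- real DLP tools detect far more than this.
SENSITIVE_PATTERNS = {
    "api_key": re.compile(r"\b(?:api[_-]?key|secret)\s*[:=]\s*\S+", re.IGNORECASE),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "internal_marker": re.compile(r"\b(?:CONFIDENTIAL|INTERNAL ONLY)\b"),
}

def scan_prompt(prompt: str) -> list[str]:
    """Return the names of any sensitive patterns found in a prompt."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items()
            if pattern.search(prompt)]

def is_safe_to_send(prompt: str) -> bool:
    """Allow the prompt only if no sensitive pattern matches."""
    return not scan_prompt(prompt)
```

A filter like this would sit between employees and an external AI tool, blocking or flagging prompts that match before submission rather than relying on after-the-fact audits.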

What can security teams do to safeguard IP?

Generative AI is a fast-moving target. Keeping your employees and confidential information secure takes vigilance. Review and update your security posture regularly. For now, here are some simple things you can do to safeguard your IP.

  • Opt out of model training. Turn off chat history and model training in ChatGPT’s data controls settings. OpenAI notes that disabling these features may limit the app’s functionality, but that may be a reasonable price to pay for IP safety.
  • Provide employee training. Tell staff how these models work and that their inputs could become public, harming the company, partners, customers, patients or other employees. Also, teach staff how generative AI improves phishing and vishing schemes to increase their vigilance for those types of attacks.
  • Review terms of use. ChatGPT’s terms of use are updated as issues arise with users. Check the terms for this and other generative AI tools frequently to ensure you stay protected.
  • Follow relevant IP legal proceedings. Globally, there will be more laws and rulings about IP and its intersection with generative AI. Corporate legal teams need to follow court proceedings and keep security teams informed of how they might affect security guidelines and adherence to the law.
  • Use the least privilege principle. Give employees only the access and authorizations required to perform their jobs. Limiting access reduces the pool of information an employee could share, intentionally or otherwise, with external AI tools.
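The least privilege idea above can be sketched as a simple access check: each role grants only the document classifications it needs, and anything else is refused before it can be copied into an external tool. The roles and classification labels here are hypothetical; real deployments would enforce this through IAM policies rather than application code:

```python
# Hypothetical role-to-classification mapping; real systems use IAM policies.
ROLE_PERMISSIONS = {
    "engineer": {"public", "internal"},
    "analyst": {"public"},
    "executive": {"public", "internal", "restricted"},
}

def can_access(role: str, classification: str) -> bool:
    """Grant access only if the role explicitly includes the classification."""
    return classification in ROLE_PERMISSIONS.get(role, set())
```

Note the default deny: an unknown role gets an empty permission set, so access must be explicitly granted rather than implicitly assumed.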

The easy proliferation of generative AI has democratized and accelerated its adoption. This tech-led trend will drive disruption. Questions about intellectual property protection will arise from it. Learn more about how IBM helps you embrace the opportunities of generative AI while also protecting against the risks.
