ChatGPT reached 100 million users in January 2023, only two months after its release. That’s a record-breaking pace for an app. Numbers at that scale indicate that generative AI — AI that creates new content such as text, images, audio and video — has arrived. But with it comes new security and intellectual property (IP) issues for businesses to address.

ChatGPT is being used — and misused — by businesses and criminal enterprises alike. This has security implications for your business, employees and the intellectual property you create, own and protect.

How is ChatGPT being used?

With over 100 million users, the applications for ChatGPT are legion, and real-world examples of how businesses are leveraging the app keep multiplying. IT companies are applying it to software development, debugging, chatbots, data analysis and more. Service companies are streamlining sales, improving customer service and automating routine tasks. Government and public service sectors see benefits in creating draft language for laws and bills and creating content in multiple languages. And countless individuals are using the app as a personal productivity tool.

Of course, as with all innovations, thieves discover uses as well. Generative AI tools are being used in phishing attempts, making them faster to execute, harder to detect and easier to fall for. ChatGPT imitates real human conversation. That means the typos, odd phrasing and poor grammar that often alert users to phishing foul play may soon disappear. Fortunately, while generative AI can be used by criminals to create problems, cybersecurity pros can use ChatGPT to counter them.


Pitfalls of ChatGPT and its intellectual property implications

OpenAI, the developer of ChatGPT, notes the hazards of the generative AI app. It states that “…outputs may be inaccurate, untruthful and otherwise misleading at times” and that the tool will “hallucinate,” or simply invent, outputs. Generative AI models improve as they learn from ever-larger language data sets, but inaccuracy remains common. Any output generated by the app requires human fact-checking and quality control before use or distribution.

These inaccuracies can complicate your company’s IP rights. IP rights fall into four main categories: patents, trademarks, copyrights and trade secrets. If you claim IP rights to something even partially AI-generated, you need to ensure its accuracy first. To make matters muddier, one big question remains unresolved about AI-generated IP: ownership.

Who owns ChatGPT output? It’s complicated.

Per current ChatGPT terms of use, where permitted by law, you own the input (such as the prompts, questions or texts you enter when seeking output from the tool). Based on your input, the service delivers output. Collectively, the input and output are known as “content” per the terms of use. The terms state that OpenAI assigns to you all its rights, title and interest in and to the output.

However, OpenAI can’t assign rights to content it didn’t initially own. The terms also state that the user is responsible for generated content, including ensuring it does not violate applicable laws or OpenAI’s terms of use. They further note that one user’s output may be exactly the same as another’s, using the example query, “Why is the sky blue?” Two different users might ask that same question, and the output could be identical for both.

Many issues revolve around the intersection of AI and intellectual property. A few have been decided, while others have not yet been litigated and remain unresolved. Thaler v. Vidal settled the question of patents in the U.S.: the Federal Circuit held that an AI system cannot be named as an inventor, and in April 2023 the U.S. Supreme Court declined to hear the appeal, leaving in place the rule that patents can only be obtained by humans. However, Congress is now considering the issue and seeking guidance on how AI inventorship should be treated.

In March 2023, the U.S. Copyright Office delivered guidance on registering copyright for works containing AI-generated material. During the copyright application, the applicant must disclose whether the material contains AI-generated content. The guidance also states that the applicant must explain the human’s contributions to the work, and that sufficient human authorship must be established for that part of the work to qualify for copyright protection.

What about user input? That’s complicated too.

AI language models improve continuously by training on new data. ChatGPT captures your chat history to help train its model, which means your input could become training data. If you input confidential or proprietary information, that could put your company’s intellectual property at risk of theft or dissemination. Samsung discovered this the hard way when its engineers accidentally leaked internal source code in an upload to ChatGPT. In response, the company temporarily banned staff from using generative AI tools on company-owned devices.

Samsung isn’t alone. One data security service discovered and blocked requests to input confidential data into ChatGPT from 4.2% of 1.6 million workers at its client companies. The inputs included client data, source code and other proprietary and confidential information. One executive pasted corporate strategy into the app and requested the creation of a PowerPoint deck. In another incident, a doctor input a patient’s name and condition into the model to help write a letter to an insurance company. The fear is that this confidential data could resurface as output in response to the right query.

What can security teams do to safeguard IP?

Generative AI is a fast-moving target. Keeping your employees and confidential information secure takes vigilance. Review and update your security posture regularly. For now, here are some simple things you can do to safeguard your IP.

  • Opt out of model training. Turn off chat history and model training in ChatGPT’s data controls settings. OpenAI notes that disabling these settings may limit the app’s functionality, but that may be a reasonable price to pay for IP safety.
  • Provide employee training. Tell staff how these models work and that their inputs could become public, harming the company, partners, customers, patients or other employees. Also, teach staff how generative AI improves phishing and vishing schemes to increase their vigilance for those types of attacks.
  • Review terms of use. ChatGPT’s terms of use are updated as new issues arise with users. Check the terms for this and other generative AI tools regularly to ensure you stay protected.
  • Follow relevant IP legal proceedings. Globally, there will be more laws and rulings about IP and its intersection with generative AI. Corporate legal teams need to follow court proceedings and keep security teams informed of how they might affect security guidelines and adherence to the law.
  • Use the least privilege principle. Give employees the least access and authorization required to perform their jobs. This limits unauthorized access to information that could otherwise be shared with external AI tools.
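As a hypothetical illustration of the kind of guardrail a security team might build alongside the practices above, the sketch below screens outbound prompts for patterns that suggest confidential material before they ever reach an external generative AI API. The patterns and function names here are illustrative assumptions, not a vetted data loss prevention (DLP) ruleset; a production deployment would use a dedicated DLP engine and policies tailored to your organization.

```python
import re

# Illustrative patterns only -- a real deployment would rely on a
# vetted DLP engine with organization-specific rules.
BLOCK_PATTERNS = {
    "api_key": re.compile(r"(?i)\b(api[_-]?key|secret[_-]?key)\b\s*[:=]"),
    "private_key": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
    "source_code": re.compile(r"\b(def |class |#include\s*<|import )"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def screen_prompt(prompt: str) -> tuple[bool, list[str]]:
    """Return (allowed, reasons): block prompts matching any pattern.

    Intended to sit between employees and an external AI service,
    rejecting prompts that appear to contain confidential material.
    """
    reasons = [name for name, pat in BLOCK_PATTERNS.items()
               if pat.search(prompt)]
    return (len(reasons) == 0, reasons)
```

In this sketch, an innocuous prompt like “Why is the sky blue?” passes, while a prompt containing something like `api_key = '...'` or a pasted function definition is blocked with the matching rule names returned as reasons, which can feed logging and employee feedback.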

The easy proliferation of generative AI has democratized and accelerated its adoption. This tech-led trend will drive disruption, and questions about intellectual property protection will come with it. Learn more about how IBM helps you embrace the opportunities of generative AI while also protecting against the risks.
