ChatGPT reached 100 million users in January 2023, only two months after its release. That’s a record-breaking pace for an app. Adoption at that scale signals that generative AI, which creates new content such as text, images, audio and video, has arrived. But with it come new security and intellectual property (IP) issues for businesses to address.

ChatGPT is being used — and misused — by businesses and criminal enterprises alike. This has security implications for your business, employees and the intellectual property you create, own and protect.

How is ChatGPT being used?

With over 100 million users, the applications for ChatGPT are legion, but several real-world patterns stand out. IT companies are applying the app to software development, debugging, chatbots, data analysis and more. Service companies are streamlining sales, improving customer service and automating routine tasks. Government and public service sectors see benefits in drafting language for laws and bills and in creating content in multiple languages. And countless individuals use the app as a personal productivity tool.

Of course, as with all innovations, criminals find uses as well. Generative AI tools are being used in phishing attempts, making those attacks faster to execute, harder to detect and easier to fall for. ChatGPT imitates real human conversation, which means the typos, odd phrasing and poor grammar that often tip users off to phishing may soon disappear. Fortunately, while criminals can use generative AI to create problems, cybersecurity pros can use ChatGPT to counter them.

Pitfalls of ChatGPT and its intellectual property implications

OpenAI, the developer of ChatGPT, acknowledges the hazards of the generative AI app, stating that “…outputs may be inaccurate, untruthful and otherwise misleading at times” and that the tool will, in OpenAI’s words, “hallucinate” — that is, simply invent outputs. Generative AI models improve as they learn from ever-larger language data sets, but inaccuracy remains common. Any output the app generates requires human fact-checking and quality control before use or distribution.

These inaccuracies can complicate your company’s IP rights. IP rights fall into four main categories: patents, trademarks, copyrights and trade secrets. If you claim IP rights to something even partially AI-generated, you need to ensure its accuracy first. To make matters muddier, one big question remains unresolved about AI-generated IP: ownership.

Who owns ChatGPT output? It’s complicated.

Per current ChatGPT terms of use, where permitted by law, you own the input (such as the prompts, questions or texts you enter when seeking output from the tool). Based on your input, the service delivers output. Collectively, the input and output are known as “content” per the terms of use. The terms state that OpenAI assigns to you all its rights, title and interest in and to the output.

However, OpenAI can’t assign rights to content it didn’t own in the first place. The terms of use also state that the user is responsible for generated content, including ensuring it does not violate applicable laws or OpenAI’s terms of use. The terms further note that one user’s output may be identical to another’s, offering the example query, “Why is the sky blue?” Two different users might ask that same question, and the output could be the same for both.

Many issues revolve around the intersection of AI and intellectual property. A few have been decided, while others have not yet been litigated and remain unresolved. Thaler v. Vidal settled the question of patents in the U.S.: the Federal Circuit ruled that an AI system cannot be named as an inventor and that patents can be obtained only by humans, and in April 2023 the U.S. Supreme Court declined to hear the case, leaving that ruling in place. However, Congress is now considering the issue and seeking guidance on how AI inventorship should be treated.

In March of 2023, the U.S. Copyright Office issued guidance on registering copyright for works containing AI-generated material. In the copyright application, the applicant must disclose whether the work contains AI-generated content and must describe the human author’s contributions. Copyright protection extends only to those parts of the work with sufficient human authorship.

What about user input? That’s complicated too.

AI language models use data to continuously improve. ChatGPT captures your chat history to help train its model, which means your input could end up in its training data. If you input confidential or proprietary information, you put your company’s intellectual property at risk of theft or dissemination. Samsung discovered this the hard way when its engineers accidentally leaked internal source code by uploading it to ChatGPT. In response, the company temporarily banned staff from using generative AI tools on company-owned devices.

Samsung isn’t alone. One data security service discovered and blocked attempts by 4.2% of the 1.6 million workers at its client companies to input confidential data into ChatGPT. The inputs included client data, source code and other proprietary and confidential information. One executive pasted corporate strategy into the app and requested a PowerPoint deck. In another incident, a doctor input a patient’s name and condition into the model to help write a letter to an insurance company. The fear is that this confidential data could resurface as output in response to the right query.
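The kind of blocking described above can start very simply: scan each outbound prompt for confidential-data patterns before it leaves the corporate network. The sketch below is a minimal, illustrative version of that idea; the pattern names and regular expressions are assumptions for demonstration, not any vendor’s actual detection rules, and real data loss prevention tools use far richer techniques such as classifiers and document fingerprinting.

```python
import re

# Illustrative patterns only. A real DLP deployment would use a much
# larger, tuned rule set plus machine-learning classifiers.
CONFIDENTIAL_PATTERNS = {
    "api_key": re.compile(r"\b(?:api[_-]?key|secret)\s*[:=]\s*\S+", re.IGNORECASE),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "internal_marker": re.compile(r"\b(?:CONFIDENTIAL|INTERNAL ONLY)\b"),
}

def scan_prompt(prompt: str) -> list[str]:
    """Return the names of any confidential-data patterns found in the prompt."""
    return [name for name, pattern in CONFIDENTIAL_PATTERNS.items()
            if pattern.search(prompt)]

def is_safe_to_send(prompt: str) -> bool:
    """Allow the prompt through only if no pattern matched."""
    return not scan_prompt(prompt)
```

A gateway sitting between employees and an external AI tool could call `is_safe_to_send` on every request and reject (or log) anything flagged, which is roughly what the blocking service described above does at scale.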

What can security teams do to safeguard IP?

Generative AI is a fast-moving target. Keeping your employees and confidential information secure takes vigilance. Review and update your security posture regularly. For now, here are some simple things you can do to safeguard your IP.

  • Opt out of model training. Turn off chat history and model training in ChatGPT’s data controls settings. OpenAI notes that disabling these features may limit the app’s functionality, but that may be a reasonable price to pay for IP safety.
  • Provide employee training. Tell staff how these models work and that their inputs could become public, harming the company, partners, customers, patients or other employees. Also, teach staff how generative AI improves phishing and vishing schemes to increase their vigilance for those types of attacks.
  • Review terms of use. OpenAI updates the ChatGPT terms of use as issues arise with users. Check the terms of this and other generative AI tools frequently to ensure you stay protected.
  • Follow relevant IP legal proceedings. Globally, there will be more laws and rulings about IP and its intersection with generative AI. Corporate legal teams need to follow court proceedings and keep security teams informed of how they might affect security guidelines and adherence to the law.
  • Apply the principle of least privilege. Give employees only the access and authorizations required to perform their jobs. This helps cut down on unauthorized access to information that could be shared with external AI tools.
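The least-privilege idea in the list above can be sketched as a small policy check: an internal proxy in front of an external AI tool consults an explicit allowlist of which roles may touch which data categories, and denies everything else by default. The roles, categories and policy table below are illustrative assumptions, not a real product’s schema.

```python
# Hypothetical role-to-data-category policy for a gateway that proxies
# employee requests to an external AI tool. Deny-by-default: anything
# not explicitly granted is refused.
ROLE_PERMISSIONS = {
    "marketing": {"public", "marketing_copy"},
    "engineer": {"public", "source_code"},
    "support": {"public", "customer_tickets"},
}

def may_access(role: str, data_category: str) -> bool:
    """Allow access only if the role's policy explicitly grants the category."""
    return data_category in ROLE_PERMISSIONS.get(role, set())
```

Under this policy, a marketing employee could not route source code through the tool at all, which narrows the blast radius of any accidental leak like the Samsung incident described earlier.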

The easy proliferation of generative AI has democratized and accelerated its adoption. This tech-led trend will drive disruption. Questions about intellectual property protection will arise from it. Learn more about how IBM helps you embrace the opportunities of generative AI while also protecting against the risks.
