May 13, 2024 By Jonathan Reed 3 min read

The Digital Millennium Copyright Act (DMCA) is a U.S. federal law that protects copyright holders from online infringement. The DMCA covers music, movies, text and anything else under copyright.

The DMCA also makes it illegal to hack technologies that copyright owners use to protect their works against infringement. These technologies can include encryption, password protection or other measures. These provisions are commonly referred to as the “Anti-Circumvention” provisions or “Section 1201”.

Now, a fierce debate is brewing over whether to allow independent hackers to legally circumvent Section 1201 restrictions to probe AI models. The goal of this legal hacking activity would be to detect problems like bias and discrimination.

Proponents of this exemption claim that it would boost transparency and trust in generative AI. Opponents, largely media and entertainment companies, cite data privacy concerns and fear the exemption could enable piracy.

The debate has just begun, and each side is presenting compelling arguments. The U.S. Copyright Office has opened the proceeding by receiving comments in opposition to the Section 1201 Exemption, and proponents have been given the opportunity to reply. A final decision on this AI cybersecurity issue has yet to be made.

Opponents worry about privacy and protection

Opponents of the Section 1201 Exemption say that supporters have failed to meet their burden of proof. As one opposition comment argues: “As an initial matter, Proponents do not identify what technological protection measures (‘TPMs’), if any, currently exist on generative AI tools or models. This failure alone leads to the conclusion that the request for the proposed exemption should be denied.”

Those opposed to the exemption also say it is too broad and based on a “sparse, undeveloped record.” Opponents also urge the Copyright Office to reject “belated attempts through the proposal to secure an expansion of the security research exemption to include generative AI models.”

Supporters worry about AI bias

Section 1201 Exemption supporters, like the Hacking Policy Council, say that the proposed exemption would only “apply to a particular class of works: computer programs, which are a subcategory of literary works. The proposed exemption would apply to a specific set of users: persons performing good faith research, as defined, under certain conditions. These are the same parameters that the Copyright Office uses to describe other classes of works and sets of users in existing exemptions.”

Supporters also say that they support “the petition to protect independent testing of AI for bias and alignment (“trustworthiness”) because we believe such testing is crucial to identifying and fixing algorithmic flaws to prevent harm or disruption.”

The bigger picture

Generative AI is artificial intelligence (AI) that can create original content — such as text, images, video, audio or software code — in response to a user’s prompt or request.

Recently, the world has witnessed an unprecedented surge of AI innovation and adoption. Generative AI offers enormous productivity benefits for individuals and organizations, but it also presents very real challenges and risks. All this has led to a flurry of conversations about how to regulate generative AI, and the Section 1201 Exemption is but one example.

The debate is occurring on a global scale, such as with the EU AI Act, which aims to be the world’s first comprehensive regulatory framework for AI applications. The Act completely bans some AI uses while implementing strict safety and transparency standards for others. Penalties for noncompliance can reach EUR 35,000,000 or 7% of a company’s annual worldwide revenue, whichever is higher.

Nobody knows who will win these arguments over AI security issues. But the future use and limits of generative AI hang in the balance.
