OpenAI is touting a new plan to protect creator works—here’s why it won’t actually resolve AI’s copyright crisis

By Sharon Goldman, AI Reporter

Sharon Goldman is an AI reporter at Fortune and co-authors Eye on AI, Fortune’s flagship AI newsletter. She has written about digital and enterprise tech for over a decade.

Yesterday, OpenAI announced in a blog post that it is developing a “Media Manager” that will allow artists, creators, and content owners to claim ownership of their works and specify whether or not they want them used to train OpenAI’s models. Creatives will ultimately be able to opt out of having their written or visual work included in future AI training datasets once the tool is released, which OpenAI says will be by 2025.

There was immediate backlash on social media: One musician posted on X saying the opt-out offer neglects to focus on OpenAI’s past scraping of copyrighted content, “like a burglar telling the victims to send in a form requesting which items in their house they shouldn’t steal…after they’ve already been burgled.” Another artist also pushed back against the idea of an “opt-out” option, saying that the answer is “opt in, with coherent licensing agreements or simply stop propping up your theft as a business.” 

A long-running debate on ‘fair use’ of copyrighted data

The truth is, OpenAI’s new plan to protect creator data does nothing to resolve the long-running debate about whether tools like ChatGPT and DALL-E, which were trained on vast swaths of scraped web content, including copyrighted material, broke copyright laws. Most of the copyright-related lawsuits that have piled up against OpenAI over the past 18 months are still ongoing, including the most recent, from eight major newspapers, which accuses OpenAI and Microsoft of using their copyrighted content to train AI models without compensation.

In fact, OpenAI makes its own defense case clear in the blog post’s second paragraph: “While we believe legal precedents and sound public policy make learning fair use, we also feel that it’s important we contribute to the development of a broadly beneficial social contract for content in the AI age.” 

The legal issue of “fair use” of data is at the core of today’s AI copyright wars—a battle that some experts believe could ultimately end up before the Supreme Court. U.S. copyright law “permits limited use of copyrighted material without having to first acquire permission from the copyright holder.” But “fair use” is no simple standard. Judges weigh four factors when deciding whether a new work is “transformative” or simply a copy: the purpose and character of the use, the nature of the copyrighted work, the amount taken from the original, and the effect of the new work on the potential market for the original. The fourth factor is particularly key for generative AI, since creators claim AI models can negatively impact the commercial value of the original work or impede opportunities for the copyright holder to exploit their work in the market.

New OpenAI tool is meant to comply with EU AI Act standards

However, none of that is what OpenAI is addressing with the new Media Manager tool it is developing. Instead, it is simply complying with the standards laid out in the European Union’s recently approved, soon-to-be-implemented AI Act, according to Aviya Skowron, head of policy and ethics at EleutherAI.

“This is due to the EU AI Act and the Text and Data Mining (TDM) exception in existing EU copyright law,” Skowron posted on X yesterday. They added that the announcements in OpenAI’s blog post make sense in the current regulatory landscape: The EU does not have a unified “fair use” doctrine as the U.S. does, they explained, which further complicates matters by creating competing legal doctrines.

The bottom line? Legal battles over AI training data and copyright won’t slow down anytime soon. And there’s no doubt that OpenAI is fully prepared to keep defending itself against those who claim “fair use” does not apply to its use of copyrighted works in training datasets. The Media Manager is not about past use of data—it’s about the future. And that means complying with the EU AI Act.
