- AI sparks copyright, job replacement concerns for artists
- Lawmakers propose ideas as courts consider the technology
In 1908, the Supreme Court ruled that “piano rolls,” a new and increasingly popular technology that automatically played songs without a human musician, didn’t violate copyright law—a blow to the music industry. Songwriters and composers felt their livelihoods threatened but had no grounds to demand royalties from piano-roll makers that copied their sheet music.
That was until Congress stepped in the following year, amending the law to address piano rolls and ensure musicians were paid royalties.
More than a century later, musicians—along with other creatives—are yet again fearful an emerging technology could disrupt their industries: artificial intelligence.
Copyright holders ranging from newspapers to novelists to artists have brought lawsuits against the top AI companies.
Judges have shown some skepticism toward the creatives’ claims in early rulings, but the litigation is just beginning and the number of lawsuits is growing. The New York Times’ recent lawsuit against OpenAI, for example, cited dozens of instances in which ChatGPT produced near-verbatim copies of Times articles, and dozens of prominent authors have joined proposed class actions in California and New York federal courts.
Those cases could take years to resolve completely. In the meantime, creatives are calling on Congress to protect them sooner as they compete with powerful AI tools.
“I don’t think Congress should just step aside and let the courts work things out. There’s more at stake here than just copyright law,” said Michael Huppe, president and CEO of SoundExchange, a collective-rights management company for musicians. “The courts are not the venue where you want to figure out what type of standards need to be put in place.”
Lawmakers are considering several ideas to tackle the problem, but the balance is tricky. Give AI companies an exemption from copyright liability, and the creative industries could suffer. Yet if the AI companies are forced to pay billions in damages and shut down their models, the US risks its booming AI development falling behind that of China, Japan, Israel, and other countries with friendlier regulatory environments.
“But we have to do that and still maintain America’s lead—can’t slow it down,” said Senate Majority Leader Chuck Schumer (D-N.Y.).
Fill in the Gaps
Authors, artists, and other copyright holders have accused AI companies of stealing their work. They allege AI developers have trained their products on copyrighted materials without permission or compensation.
AI companies facing a barrage of legal challenges largely maintain that their conduct falls under copyright’s “fair use” doctrine, which allows unauthorized copying under certain circumstances. They contend the AI training process is like a human learning to paint or read by analyzing a trove of art and novels.
The companies argue courts have decades of carefully crafted legal precedent to guide them through the fair use question and congressional intervention would upset that norm. Some examples include a case involving Google scanning millions of books for its books database project and a case about reverse engineering video game consoles, both of which were found to be fair use.
Still, many tech companies concede that Congress should pass some regulations to fill in any gaps in copyright law that AI has exposed.
“By and large, copyright law has worked well over more than a century, even through previous technology transitions. From analog to digital, from theaters to streaming, copyright law’s remained basically the same,” said Christopher Padilla, vice president of government and regulatory affairs at IBM.
Several lawmakers agree with the approach, dismissing an overhaul of copyright rules and advocating for building on existing law.
“My suspicion is that, in most use cases, the concept of fair use will continue to apply, and we’ll have to clarify what the law is in some other instances,” one senator said.
To tackle one gap, artists and policy experts have urged Congress to require transparency around AI training methods and uses to protect copyright holders. Navrina Singh, founder and chief executive officer of AI governance company Credo AI, has pushed for companies to adopt transparency and disclosure reporting across “the entire AI lifecycle.”
It’s “really important for Congress” to mandate such reporting and incentivize companies to “actually embrace transparency, rather than just running in the wild in innovation,” Singh said.
A recent bill would mandate that kind of transparency reporting.
Image and Likeness
Hollywood movie stars and Grammy award-winning musicians have also been victims of AI-generated deepfakes. The latest example of such disinformation involved fake, sexually explicit content of Taylor Swift proliferating across social media.
Yet deepfakes that replicate an artist, cloning a voice to sing a song or mimicking a performance, represent another issue that current copyright law doesn’t cover, creatives argue. Viral AI-generated songs, such as one last year that cloned the voices of popular musicians Drake and The Weeknd, have triggered industry panic.
“It should be prohibited for people to just capitalize on that without the artist’s permission or without a licensing agreement,” Huppe of SoundExchange said. “Congress needs to affirmatively legislate on that issue.”
Powerful deepfake technology has also sparked job replacement nightmares for artists.
“If you think about the concept of robots taking factory jobs, this is a little different. This is a replica of you taking your job—it’s far more egregious, it’s far more dangerous,” said Jeffrey Bennett, general counsel at SAG-AFTRA, the union for actors and performers. “No one should have to compete with a digital version of themselves for work.”
SAG-AFTRA negotiated protections for voice and likeness in a contract with Hollywood studios that ended a months-long strike, but the safeguards aren’t enshrined in federal law. The unauthorized use of someone’s image, voice, or likeness could be challenged under different statutes, but federal standards would ensure protections, artists say.
Notably, late actors Carrie Fisher and Paul Walker had digital copies of them featured in movies from the blockbuster franchises “Star Wars” and “Fast and Furious,” respectively. The practice is still rare, and studios tend to get authorization from actors or their estates, as was the case for both Fisher and Walker.
But without federal protections, advocates say, the use of generative AI to replace artists, such as voice actors in animated films and video games, could become pervasive.
Creatives—and some tech officials—say a recent measure proposed by a bipartisan group of senators is a step in the right direction. The No FAKES Act would create a federal “right of publicity,” holding individuals or companies liable for AI-generated works that replicate an artist’s image, voice, or likeness without their consent. A bipartisan group in the House introduced the similar No AI FRAUD Act, which would give individuals the right to control their identifying characteristics, such as their voice.
“We’re making steady progress,” said Sen. Chris Coons (D-Del.), who chairs the Senate Judiciary’s intellectual property subcommittee.
A Race to Regulate
With artists pushing both courts and policymakers to address AI copyright questions, the two branches are racing to provide answers.
If Congress waits for the courts, lawmakers could end up overreacting to a future ruling in favor of the AI companies by changing copyright law and shaking up decades of precedent, said Dana Rao, general counsel of Adobe.
“We’re saying to Congress: think ahead,” Rao said. “Create this ‘style right’ now. Give them the right they need, instead of trying to force fit copyright into something that isn’t probably going to protect them.”
On the other hand, if the courts find the AI industry has violated the law, companies such as OpenAI could be liable for hundreds of billions of dollars in damages and may need to scrap their models altogether, an enormous blow to the burgeoning industry that policymakers are trying to promote.
Matthew Sag, a law professor at Emory University who researches AI, said he believes that nightmare scenario for AI companies is far-fetched, given the consequences for the country’s competitiveness and existing legal precedents.
“We’re unlikely to see a ruling that says none of this is fair use under any circumstances,” Sag said. “There’s already a roadmap towards a slightly more conservative, less aggressive way of doing this.”