Anthropic PBC told a pair of courts in July that a judge's order approving class-action status for authors' copyright lawsuit over millions of pirated books put "inordinate pressure" on the company to settle a case that could kill it. A month later, the AI firm bowed to that pressure and settled.
In a joint filing Tuesday to a San Francisco federal court, Anthropic and the authors said they reached a class-wide settlement and asked to stay discovery and vacate deadlines while they finalize the deal. The settlement comes after Anthropic warned both the district court and an appeals court that the class’ potential pursuit of hundreds of billions of dollars in statutory damages created a “death knell” situation and would force an unfair settlement, regardless of the case’s legal merit.
Anthropic said it found itself facing existential financial pressure within weeks of the class certification decision, in what it called "possibly" the largest copyright class action ever. Santa Clara Law Professor Edward Lee had estimated damages could top $900 billion if a jury found Anthropic's infringement was willful, while the company's own chief financial officer told the court that Anthropic expects no more than $5 billion in revenue this year while operating at a loss of billions of dollars.
Judge William Alsup of the US District Court for the Northern District of California repeatedly rejected Anthropic’s attempts to avoid trial after his July order for class certification. In an Aug. 11 ruling he said the AI company “refused to come clean” about which pirated works it used to train its models.
"If Anthropic loses big it will be because what it did wrong was also big," Alsup said. The authors were also pressing to narrow Anthropic's defenses at trial by barring the company from claiming "innocent" infringement, which carries far lower damages.
The case stems from Anthropic's downloading of over 7 million books from the pirated "shadow libraries" LibGen and PiLiMi. The authors sued Anthropic in August 2024 for copyright infringement, accusing the company of illegally downloading the books to train its large language models. While Alsup ruled that training AI on copyrighted works is fair use, he left the piracy issue to a jury and certified the class, a decision Anthropic called "unprecedented and erroneous."
The company argued courts routinely disfavor class certification in copyright cases, but found itself backed into a corner when Alsup rejected those arguments and set a December trial date in San Francisco. It had asked both Alsup and the US Court of Appeals for the Ninth Circuit to delay that date before Tuesday’s filings.
“There’s almost no scenario in which the magnitude of potential statutory damages didn’t drive this settlement,” said Adam Eisgrau, senior director of AI, creativity, and copyright policy at Chamber of Progress, a trade group supported by companies including Google and Midjourney. The settlement appeared to be “under duress” and is “proof positive that maximum statutory damages under the Copyright Act are excessive and potential innovation killers,” he said.
The authors’ counsel, Justin Nelson of Susman Godfrey LLP, called the settlement “historic.” Anthropic declined to comment.
The parties expect to finalize the settlement agreement by Sept. 3, with motions for preliminary approval due by Sept. 5.
The settlement “sends a strong signal to AI companies that they can’t pirate works of authorship instead of engaging legally,” said Maria A. Pallante, president and CEO of the Association of American Publishers. “AI can’t exist without using human authorship,” she said.
The case is one of several copyright actions brought against AI developers in courts around the country. Anthropic had appealed Alsup’s decision to certify the class and sought an emergency stay before potential class members were contacted.
Anthropic isn’t completely clear of copyright claims, as lawsuits brought by music publishers and Reddit Inc. remain pending in the same California district court. Publishers including Universal Music Corp. and Concord Music Group Inc. sued Anthropic in 2023 alleging the AI company engaged in wholesale copying of protected lyrics to train its signature large language model, Claude. Reddit’s lawsuit claims Anthropic used the social media company’s content without permission to train its AI models.
Anthropic's decision to settle its dispute with authors could have turned on the company's concern "about what 'willfulness' means" while facing statutory damages of up to $150,000 per work for willful infringement, said Bhamati Viswanathan of New England Law. The damages owed per work can drop to as low as $200 if the infringement was "innocent" under copyright law. With a pending Supreme Court hearing in Sony Music Entertainment v. Cox Communications that deals with the issue of willfulness, "getting out in front of it" may have made more sense for Anthropic, Viswanathan said.
The AI industry's demand for massive amounts of data warrants revisions to the way statutory damages are structured under copyright law, Eisgrau said. As "impressive" as Anthropic's valuation may be, with the company soon to be valued at $170 billion, those numbers don't indicate "cash on hand" but "marketplace prognostication," he added.
“The world has changed, the technology has changed, and the multiplier effect of class actions has made this important to focus on,” Eisgrau added.
Anthropic is represented by Cooley LLP, Arnold & Porter Kaye Scholer LLP, Latham & Watkins LLP, Lex Lumina LLP, and Morrison & Foerster LLP. Lieff Cabraser Heimann & Bernstein LLP, Cowan Debaets Abrahams & Sheppard LLP, Edelson PC, and Oppenheim + Zebrak LLP also represent the authors.
The cases are:
- Bartz v. Anthropic PBC, 9th Cir., No. 25-4843, motion to hold appeal filed 8/26/25.
- Bartz v. Anthropic PBC, N.D. Cal., 24-cv-5417, Stipulation with proposed order 8/26/25.