The AI industry would really like writers to take $3,000. That is roughly what the proposed $1.5 billion Anthropic settlement works out to per book for the approximately 500,000 books whose copyrights were allegedly violated when the company downloaded millions of files from pirate libraries to train its Claude AI model. For some writers and publishers, $3,000 was a significant milestone: the first real recognition that using pirated books to build a multibillion-dollar AI company was not, in fact, free. Others saw it as precisely the kind of arrangement that favors the company over the people it harmed. The second group includes John Carreyrou.
In December 2025, Carreyrou, the two-time Pulitzer Prize winner who wrote Bad Blood and broke the Theranos story, filed a lawsuit in the Northern District of California with five other authors against six of the world’s most powerful AI companies: Anthropic, OpenAI, Google, Meta, xAI, and Perplexity. According to the complaint, all six companies trained their large language models on pirated books from LibGen, Z-Library, and OceanofPDF, shadow libraries that have operated in legal limbo for years, amassing copyrighted content without permission. The writers are not asking for $3,000 per book. They want $150,000 per work from each defendant, which adds up to $900,000 per book when spread across six companies. And because the lawsuit is organized as individual actions rather than a class, each author is seeking a separate jury trial for their own works.
The legal framework for the new complaint rests on the very ruling that enabled the Anthropic settlement. In the earlier Bartz case, Judge William Alsup of the Northern District of California issued a pivotal decision in June 2025, drawing a line the AI industry had hoped courts would leave blurry. He found that, under some conditions, training AI on legally obtained books could qualify as fair use. But he also found that Anthropic’s downloading of millions of files from LibGen and Pirate Library Mirror was a separate, actionable violation, and that training on pirated copies did not fall under fair use. That decision, which rejected Anthropic’s bid for summary judgment on the piracy issue and put the case on the path to settlement, is what the new plaintiffs are building on. Their claim is straightforward: the companies knew the copies were stolen, used them anyway to build systems now valued at hundreds of billions of dollars, and $3,000 per book is nowhere near what the law allows for willful infringement.
Key Information: Authors v. AI Companies — New Lawsuit (December 2025)
| Field | Details |
|---|---|
| Case Filed | December 22, 2025 — Northern District of California |
| Lead Plaintiff | John Carreyrou — two-time Pulitzer Prize winner, author of Bad Blood |
| Other Plaintiffs | Lisa Barretta, Philip Shishkin, Jane Adams, Matthew Sacks, Michael Kochin |
| Defendants | Anthropic, OpenAI, Google, Meta, xAI, Perplexity AI |
| Core Allegation | AI companies used books pirated from LibGen, Z-Library, and OceanofPDF to train large language models without permission or compensation |
| Damages Sought | $150,000 statutory damages per work per defendant — up to $900,000 per work total |
| Previous Settlement Rejected | Bartz v. Anthropic — $1.5 billion settlement offering ~$3,000 per title (approximately 500,000 titles) |
| Settlement Critique | $3,000 represents just 2% of the Copyright Act’s $150,000 statutory ceiling |
| Plaintiff Legal Team | Stris & Maher LLP; Freedman Normand Friedland LLP; 35% contingency fee arrangement |
| Key Precedent | Judge William Alsup (June 2025) — AI training from legally acquired books = fair use; training from pirated books = not fair use |
| Related Action | Music publishers seeking $3 billion+ from Anthropic for lyric piracy (January 2026) |
| Industry Parallel | Legal experts calling this the “Napster moment” for the AI industry |

The complaint’s language about the value of those books to the companies is pointed. High-quality books, the filing argues, are the “gold standard” of training data: not secondary sources or supplemental material, but the foundation of what makes large language models coherent, fluent, and commercially viable. That fluency, in other words, was built in large part on the collective work of writers who were never asked and never compensated, by firms that now dominate headlines and command trillion-dollar valuations. The new lawsuit presents this as more than a copyright technicality. It casts it as a basic business transaction that took place without one party’s knowledge or consent.
In the meantime, the Anthropic settlement is moving forward on its own track. The final approval hearing is set for May 2026, and payments are expected to begin disbursing sometime that June, subject to court approval, with the settlement fund distributed in installments through 2027. The Authors Guild has praised the outcome as outstanding while acknowledging that authors who opted out, as these plaintiffs did, are free to pursue their own claims. Alsup, who is overseeing the settlement, has been far less forgiving of one entity encouraging authors to reject it: he called that operation’s communications “a fraud of immense proportions” and ordered changes to its website. The lawyers who filed the December lawsuit are unrelated to that controversy and have their own history of large-scale litigation.
As this develops, the AI copyright disputes appear to be entering a stage the earlier, exploratory cases were only preparing for. In January 2026, music publishers sued Anthropic, demanding over $3 billion for lyric piracy. The New York Times is still litigating against OpenAI. Publisher coalitions are joining existing class actions. Plaintiffs in related cases, such as the music publishers, have drawn on evidence surfaced during the Bartz case, which showed that Anthropic had torrented five million files from LibGen, two million from Pirate Library Mirror, and nearly 200,000 from Books3; LibGen, they found, contained songbooks and sheet music directly relevant to their claims. The legal ecosystem around AI training data is cross-pollinating, and discovery in one case expands the exposure of every company named in the others.
Whether the $150,000-per-work argument holds up at scale, and whether filing individual actions rather than joining the class proves the better strategic bet, remain open questions. What is clear is that the industry will find it hard to dismiss a two-time Pulitzer winner who declined a $1.5 billion settlement because it amounts to 2% of what the law allows. At what legal experts have dubbed the AI industry’s Napster reckoning, the informal, scale-first, permission-later approach to content acquisition is colliding head-on with the cumulative force of copyright law. That force brought down Napster. For AI, what takes its place is still genuinely unclear.
