{"id":888,"date":"2025-06-24T18:01:15","date_gmt":"2025-06-24T18:01:15","guid":{"rendered":"http:\/\/www.logicalware.net\/?p=888"},"modified":"2025-06-24T19:27:48","modified_gmt":"2025-06-24T19:27:48","slug":"anthropic-wins-ruling-on-ai-training-in-copyright-lawsuit-but-must-face-trial-on-pirated-books","status":"publish","type":"post","link":"http:\/\/www.logicalware.net\/index.php\/2025\/06\/24\/anthropic-wins-ruling-on-ai-training-in-copyright-lawsuit-but-must-face-trial-on-pirated-books\/","title":{"rendered":"Anthropic wins ruling on AI training in copyright lawsuit but must face trial on pirated books"},"content":{"rendered":"
In a test case for the artificial intelligence industry, a federal judge has ruled that AI company Anthropic didn\u2019t break the law by training its chatbot Claude on millions of copyrighted books.<\/p>\n
But the company is still on the hook and must now go to trial over how it acquired those books by downloading them from online \u201cshadow libraries\u201d of pirated copies.<\/p>\n
U.S. District Judge William Alsup of San Francisco said in a ruling filed late Monday that the AI system\u2019s distillation of thousands of written works into its own passages of text qualified as \u201cfair use\u201d under U.S. copyright law because it was \u201cquintessentially transformative.\u201d<\/p>\n
\u201cLike any reader aspiring to be a writer, Anthropic\u2019s (AI large language models) trained upon works not to race ahead and replicate or supplant them \u2014 but to turn a hard corner and create something different,\u201d Alsup wrote.<\/p>\n
But while dismissing a key claim made by the group of authors who sued the company for copyright infringement last year, Alsup also said Anthropic must still go to trial in December over its alleged theft of their works.<\/p>\n
\u201cAnthropic had no entitlement to use pirated copies for its central library,\u201d Alsup wrote. <\/p>\n
A trio of writers \u2014 Andrea Bartz, Charles Graeber and Kirk Wallace Johnson \u2014 alleged in their lawsuit last summer that Anthropic\u2019s practices amounted to \u201clarge-scale theft,\u201d and that the company \u201cseeks to profit from strip-mining the human expression and ingenuity behind each one of those works.\u201d<\/p>\n
As the case proceeded over the past year in San Francisco\u2019s federal court, documents disclosed in court showed Anthropic\u2019s internal concerns about the legality of its use of online repositories of pirated works. The company later shifted its approach and attempted to purchase copies of digitized books.<\/p>\n
\u201cThat Anthropic later bought a copy of a book it earlier stole off the internet will not absolve it of liability for the theft but it may affect the extent of statutory damages,\u201d Alsup wrote. <\/p>\n
The ruling could set a precedent for similar lawsuits that have piled up against Anthropic competitor OpenAI, maker of ChatGPT, as well as against Meta Platforms, the parent company of Facebook and Instagram. <\/p>\n
Anthropic \u2014 founded by ex-OpenAI leaders in 2021 \u2014 has marketed itself as the more responsible and safety-focused developer of generative AI models that can compose emails, summarize documents and interact with people in a natural way.<\/p>\n
But the lawsuit filed last year alleged that Anthropic\u2019s actions \u201chave made a mockery of its lofty goals\u201d by tapping into repositories of pirated writings to build its AI product.<\/p>\n
Anthropic said Tuesday it was pleased that the judge recognized that AI training was transformative and consistent with \u201ccopyright\u2019s purpose in enabling creativity and fostering scientific progress.\u201d Its statement didn’t address the piracy claims. <\/p>\n
The authors’ attorneys declined comment. <\/p>\n","protected":false},"excerpt":{"rendered":"
In a test case for the artificial intelligence industry, a federal judge has ruled that AI company Anthropic didn\u2019t break the law by training its chatbot Claude on millions of<\/p>\n