Anthropic agrees to pay $1.5 billion to settle AI copyright lawsuit involving chatbot Claude
As artificial intelligence (AI) advances, questions continue to arise about how large language models (LLMs) are trained, whether their training data is collected legally, and what legal responsibilities AI companies bear. Last year, a group of authors sued Anthropic, alleging that it used pirated copies of their books to train its chatbot Claude. The case has now taken a major turn: Anthropic has agreed to pay $1.5 billion to settle.
According to Reuters, this is believed to be the largest copyright settlement in US history and the first settlement of a lawsuit centered on the use of copyrighted material to train artificial intelligence. The agreement still requires court approval at a hearing scheduled for September 8, 2025.
The class action lawsuit alleged that Anthropic obtained hundreds of thousands of copyrighted works through illegal downloads rather than from legitimate licensing sources. According to The New York Times, the settlement covers roughly 500,000 works, with rights holders expected to receive about $3,000 per work in damages. In addition to the payment, Anthropic pledged to delete the illegally obtained copies from its training data so they cannot be reused in the future.
This case is seen as a turning point in defining the legal boundaries of AI, particularly around the concept of 'fair use' in model training. Training a model on lawfully acquired copyrighted books and documents is not in itself against the law, but exploiting pirated copies is considered illegal.
From the perspective of authors and publishers, this is a historic victory that protects creative work from unauthorized exploitation. For Anthropic, settling rather than dragging out the litigation limits its legal exposure, and the outcome is likely to shape how AI companies handle similar lawsuits in the future.