A group of authors, including Andrea Bartz, Charles Graeber, and Kirk Wallace Johnson, has filed a class action lawsuit against Anthropic, claiming the company infringed their copyrights by using their books to train its AI models. The authors allege that Anthropic downloaded pirated copies of their works and used them, without permission or compensation, to train its large language model, Claude. The complaint asserts that Anthropic built its multibillion-dollar business on thousands of copyrighted books without paying the authors who wrote them.
The lawsuit centers on the practice of using unauthorized copies of copyrighted books to train AI models, which the authors argue amounts to theft. The plaintiffs contend that Anthropic neither purchased the works legally nor sought the authors' consent, leaving them with little or no compensation for their intellectual property. They also claim that Anthropic has not been transparent about the data used to train its models, so authors cannot tell how their works were used.
The class action suit seeks unspecified financial damages, including compensation for the authors and the legal costs of the case. The plaintiffs have also demanded a jury trial and are asking the court to order Anthropic to change its practices, which could prevent further copyright infringement. The lawsuit represents the interests of over 1,000 authors and copyright owners who believe their works were unlawfully used in AI training.
Anthropic is one of several companies facing legal challenges over the use of copyrighted content to train AI models. In late 2023, The New York Times filed a lawsuit against OpenAI, accusing the company of similar copyright violations. OpenAI has since reached licensing agreements with several publishers to access their content legally and avoid further copyright disputes.