26 September 2025
This month we’ve been keeping a very close eye on the US Anthropic case.
If you don’t have time to read this full article, here’s my comment to the trade press:
“We were delighted with the Court’s decision yesterday evening. Although this settlement amount is spare change to Big Tech, and we do not necessarily agree with the author share, this, at least, represents a win for authors and rightsholders as a step in the right direction towards compensation for the unlawful use of copyright-protected works by Anthropic and other AI companies to train their AI models.
We will examine the details when the class attorney sends them to us next week, and we will update members about what this means for authors, illustrators and literary translators in the UK who have had their works stolen by Anthropic, as well as what precedent this might set for the wider industry.
Wherever we land on this, it is just the start. This settlement signifies that Big Tech is not above the law and creators’ rights cannot be ignored. The rule of law must be upheld.”
Yesterday, 25 September, US District Judge William Alsup preliminarily approved a landmark $1.5bn settlement of this copyright class action brought by a group of US authors against Anthropic over the use of their works in the training of Anthropic’s large language model, Claude.
The landmark deal is the first settlement of its kind, with many more likely to follow against tech companies including OpenAI, Microsoft and Meta over their use of copyright-protected materials to train generative AI systems.
As you’ll know, this is the class action consisting of authors and publishers who alleged that their copyright has been infringed by Anthropic PBC, which used books from the pirate websites Library Genesis (“LibGen”) and Pirate Library Mirror (“PiLiMi”) without permission to train its large language models.
Anthropic had accepted that it had used authors’ books to train its LLM, Claude, but argued that it wasn’t copyright infringement because the use was ‘transformative’.
In June 2025, Judge Alsup ruled that using books to train AI was not in itself a violation of US copyright law, and that training on legally acquired books may fall under the legal doctrine of ‘fair use’, which allows the unauthorised use of copyright-protected material in certain circumstances. On that question, Judge Alsup therefore initially ruled in favour of Anthropic.
However, in the same ruling, the Judge also found that Anthropic had violated the authors’ rights by saving pirated copies of their books as part of a “central library of all the books in the world”. This meant that the case would go to trial.
We then heard that Anthropic had agreed to settle for $1.5 billion.
The parties agreed a deal in August after Anthropic said it faced “inordinate pressure” to settle and avoid paying upwards of $1 trillion in statutory damages at a trial scheduled for December 2025.
The settlement hearings
The settlement was not, however, approved on 8 September.
The claimants’ attorneys submitted an allocation plan on 22 September proposing an optional 50-50 split of the payout for most of the books, plus a tailored approach for educational works.
The matter then returned to the court yesterday, 25 September, when the parties made further submissions to persuade the Judge that the settlement proposed was “fair, reasonable and adequate”.
The claimants’ attorneys assured Judge Alsup that the plan was designed to ensure fair and equitable distribution, whilst balancing efficiency and the need to respect pre-existing contractual arrangements.
Following preliminary approval of this $1.5 billion settlement to resolve the authors’ copyright class action, Anthropic will need to pay about $3,000 for each of the 482,460 books it downloaded from the pirate libraries Library Genesis and Pirate Library Mirror, and it will need to destroy the original and copied files.
The claimants agreed to this settlement because they risked receiving nothing if the case went to trial: Anthropic’s potential appeal of any verdict could have dragged on for years, and Judge Alsup could, at any point, have de-certified the class.
The Judge clarified that no attorneys’ fees will be paid until the settlement is complete.
The SoA will help to distribute the court-approved notice of settlement and the claims process to hundreds of thousands of authors.
Authors on the ‘Works List’
We will examine these new developments in detail to better understand what this will mean for authors, translators, and illustrators in the UK who have had works stolen by Anthropic, as well as what precedent this will set for the wider industry.
We will update this website with the latest information when we receive it. For now, we know that if you have a work that fits the eligibility criteria of the class below, you may be able to benefit from this settlement.
The class definition provides that the settlement applies to:
“All beneficial or legal copyright owners of the exclusive right to reproduce copies of any book in the versions of LibGen or PiLiMi downloaded by Anthropic as contained on the Works List.
“Book” refers to any work possessing an ISBN or ASIN which was registered with the United States Copyright Office within five years of the work’s first publication and which was registered with the United States Copyright Office before being downloaded by Anthropic, or within three months of publication.
“Excluded are the directors, officers and employees of Anthropic, personnel of federal agencies, and district court personnel. For avoidance of doubt, only works included on the Works List are in the Class.”
About the ‘Works List’
The ‘Works List’ has been compiled by reference to the raw text of the files that Anthropic downloaded, the accompanying metadata, data from the US Copyright Office and the International Standard Book Number (ISBN) and book metadata from Bowker, the key industry source of such information in the US.
There is, therefore, no need for authors to send any information to the claimants’ attorneys about their books.
Given the definition of “Book” above, the class of books which will be covered by the settlement is effectively closed and there is no benefit in authors now registering works with the US Copyright Office for the purpose of sharing in this settlement.
The settlement website and claims process
We are currently waiting for confirmation of the claims process. Authors do not need to do anything just yet.
If your book qualifies, it will appear on the ‘Works List’, which you will be able to search on the settlement website.
At the time of publishing (Friday 26 September), the Works List has not been published. Members can check the settlement website and we will also notify members when the List has been published.
If your book is on the ‘Works List’, you are invited to contact the Settlement Administrator to ensure that they have your current contact details so that:
- you can be invited to comment on the proposed terms or opt out; and
- you can participate in the settlement claim.
We will update this page when we learn more, and you can keep an eye on the Anthropic copyright settlement website for the latest updates.
Formal notice will be sent to class members’ known email and mailing addresses.
The SoA’s role in this action is to help raise awareness of the settlement and claims process.
We have questions about how the settlement will be shared with rights holders, and how allocation disputes will be handled. We will update members when we learn more.
If you have any questions about this process, please do get in touch.
We encourage you to share this news with your own contacts and networks, too.
Next steps for UK authors
Future US actions
The Anthropic settlement may serve as a model for the settlement of other legal actions being brought against AI companies in the US. Registering your works with the US Copyright Office may therefore be beneficial if you think your books may have been downloaded or torrented from an unlawful dataset. Register Your Work: Registration Portal | U.S. Copyright Office
UK legal actions
The UK equivalent of a class action is a ‘Group Litigation Order’ (GLO). With the success of the Anthropic settlement agreement, it is highly likely that we will now see Group Litigation Orders being filed against tech companies in relation to their use of pirated books to train their AI models.
We will update members about any Group Litigation Orders that they might be able to join.
Next steps for transparency
We now know that transparency in relation to the works that tech companies have used is entirely possible. Governments must legislate for AI tech companies to be transparent about their training data sources as a matter of urgency.
Transparency will enable rightsholders to check if their works have been used, and to more easily control the use of their work. In the publishing industry, we have a well-established licensing system, which allows authors and rightsholders to control if and how their work is used, and to negotiate appropriate remuneration.
We will continue to lobby Government to legislate for transparency and for the lawful use of copyright-protected works through licensing.
A reminder of the law
‘Fair use’ (US) versus ‘fair dealing’ (UK)
Under the US legal doctrine of ‘fair use’, tech companies argue that the use of copyrighted work by AI is fair because it ‘transforms’ the work and does not show copyrighted works to end users.
While ‘fair use’ sounds similar to the UK’s ‘fair dealing’ doctrine, it is fundamentally different, being much wider in scope than the UK’s more limited ‘fair dealing’ exception to copyright.
We think that neither ‘fair use’ nor ‘fair dealing’ should be a defence against the mass infringement of copyright-protected works that has taken place in the training of AI models.
The scraping of copyright-protected works by tech companies is unlawful under UK law. The UK’s copyright exception, known as ‘fair dealing’, permits the use of strictly limited extracts of a copyright-protected work, and only for a small number of specific purposes.
Fair dealing considerations include: whether the use is commercial (rather than for private research, or teaching purposes, for example), whether the new work would affect the market for the original work, and whether the amount of the original being used is reasonable and appropriate. If use of the new work acts as a substitute for the original or causes the owner to lose revenue, it is unlikely to be fair.
The US ‘fair use’ exception is much broader in scope, in that it is not limited to a list of permitted acts (as is the case in the UK with ‘fair dealing’). An important factor in assessing whether a use is ‘fair use’ is whether it is ‘transformative’, i.e. whether the new use adds something new, with a further purpose or different character.
‘Transformative’?
US tech companies are relying on this provision, arguing that the use of copyright-protected works by their AI models is ‘transformative’. However, there are other legal requirements for ‘fair use’ to apply, such as the amount and substantiality of the work used, whether the use could harm the current market for the original work and, of course, whether the new use is of a commercial nature.
US tech companies are trying to ‘stretch’ fair use, which was originally intended for purposes such as quoting, inspiration and education; instead, they are ingesting vast quantities of books into their large language models for commercial gain.
In our view, what tech companies are doing is neither ‘fair use’ nor ‘fair dealing’: it is straightforward infringement of copyright-protected works on a mass scale.
Further reading from across the pond