Update Tuesday 6 August: We have heard reports of a number of other publishers – including Wiley, Oxford University Press and Cambridge University Press – entering or considering entering into similar deals that would give AI systems access to their catalogues.
The details remain far from clear, and the publishers may intend to consult the relevant authors and pay them properly where consent is given and use is made. But we are concerned that such deals might be going ahead without authors being consulted, let alone asked for consent or paid. Such deals could also harm authors in future through reputational damage to their profession and a loss of commission opportunities.
Such uses will also harm the quality, integrity and essential variety of materials used in education and academia, as new material would be created without the safeguards and context of clear provenance, in turn harming the dissemination of information. We are also concerned about potential data protection, privacy and a range of legal liability issues which do not yet appear to have been fully addressed.
We were concerned by reports last week that Taylor & Francis Group, an academic book and journal publisher owned by Informa, has licensed rights to some or all of its catalogue of titles to Microsoft for the development of artificial intelligence (AI) systems. We know that news headlines can be misleading, and we are seeking urgent clarification from Taylor & Francis Group on the nature and extent of any licensing to Microsoft.
Imprints of Taylor & Francis Group include Routledge and CRC Press.
According to an Informa press release from May 2024, the agreement with Microsoft includes both an initial data access fee (US$10m+) and recurring payments over three years (2025, 2026 and 2027).
In this currently unregulated landscape, we are very concerned to see publishers signing deals with tech companies without consulting their authors and creators first. There are substantial copyright, moral rights and data protection questions that need to be addressed, as well as ethical questions about transparency and fairness of payment from (authorised) uses of creators’ works by AI tech companies. The impact of such uses on traditional sales of authors’ works – and authors’ professional futures more widely – must be considered.
Creators are the backbone of our world-leading creative industries: we cannot emphasise enough the vital need to protect their work, their unique creative skills, their identity and voice, and an environment in which they can afford to continue to create.
What is the SoA doing?
In our recent response to the UK general election, we urged the new Labour Government to legislate to regulate AI developers, to ensure the transparency of AI systems – including generative AI systems – and to ensure that creators are credited, paid and asked for consent before their copyright-protected works are used and reproduced.
We are asking Taylor & Francis Group for clarification on the nature and extent of any licensing deals.
See our Where We Stand page on AI for more information on our work in this area.
What can authors do?
If you find that your work has been used without consent, you can contact the SoA for bespoke guidance. We would also encourage you to contact us to share experiences and feedback – this is vital to help inform our policy work.
We have also published practical guidance for authors concerned by the potential impact of generative AI.
The Authors’ Licensing and Collecting Society (ALCS) is currently canvassing views on collective licensing options to ensure authors are compensated for the use of their works by generative AI systems. We urge any author who is an ALCS member to complete the survey to assist it in this vital research.