The Great British AI sell-out

I just attended an online town-hall meeting organised by the WGGB about the current UK government consultation on the AI industry’s abuse of copyright. Mostly, we writers are concerned about the kind of AI known as Large Language Models, as this is the kind that most directly affects our work. I’ve been avoiding meetings about AI because, well, too much unfocused handwringing and/or naive boosterism. Thankfully this conversation was focused, informed, measured and actually moderately useful.

Here’s the backstory: rather than asking creatives what they want, the government has presupposed that we writers are cool with complying in advance. Rather than asking whether what the LLM creators are doing is legal, ethical or desirable, they’ve skipped to the “let’s just be pragmatic about this” stage and drawn up a set of proposals that place the burden of protecting the creative industry on the people making the work rather than the people seeking to strip-mine it.

The options outlined in the consultation boil down to:

  1. Leave copyright law as it is, and allow AI companies to continue abusing it (really a non-option, included to give the illusion of greater choice).
  2. Create an opt-in system which assumes a default position that AI companies cannot scrape works unless express permission has been given by the rights holders*.
  3. Create an opt-out system that assumes a default position that the AI companies can scrape works UNLESS the rights holders have informed them that consent has been withdrawn.

* not necessarily the writers themselves, because not everyone who writes retains the rights to their work

Reading between the lines it seems that the ‘preferred option’ – the one that makes it look like the government is doing something to protect us while actually not placing any responsibility on the AI snake-oil salesmen – is option 3. There is, of course, no detail about what the opt-out system would look like or how it would function. Writing to each AI company individually would require constant vigilance from writers/rights holders who need to spend their time doing productive work. If writers could opt out via a central government website, similar to how we pay our self-assessment tax, that would be nice. But how, then, would that information be acted upon by the AI companies? They aren’t going to manually verify every individual source they scrape.

It seems more likely that there would be some kind of ‘token’ attached to works made available digitally, like the robots.txt file that sits on your website asking the wrong kind of bots not to scrape it (asking, not preventing: the bots are free to ignore it). But no-one has outlined what this protocol might look like. It’s almost as if it’s a bad-faith argument.
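For what it’s worth, the existing, entirely voluntary version of that mechanism already looks something like the sketch below. The crawler names (GPTBot, Google-Extended, CCBot) are the real user-agent tokens published by OpenAI, Google and Common Crawl; whether any UK opt-out scheme would be built on anything like this is guesswork on my part.

```
# A robots.txt asking known AI crawlers not to scrape the site.
# Nothing here is enforced: compliance is entirely voluntary on the crawler's part,
# which is precisely the weakness of building an opt-out regime on top of it.

User-agent: GPTBot            # OpenAI's web crawler
Disallow: /

User-agent: Google-Extended   # the token Google checks before using content for AI training
Disallow: /

User-agent: CCBot             # Common Crawl, whose datasets feed many LLM training sets
Disallow: /
```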

The opt-in is at least easier to administer: the AI companies can only use your work if you expressly provide it to them. Obviously the AI companies aren’t going to go for that. If it does become the way forward, they’ll probably find a way to argue that they can still scrape the data, they just won’t use it if it’s tagged properly. In other words, they’ll smoke but they won’t inhale.

And of course, not all rights holders *want* to withhold the works they control from the AI slop-barons. Academic publishers like Wiley, Taylor & Francis, Oxford University Press and Cambridge University Press (who, we should note, don’t pay the writers or editors of their books, despite charging university libraries incredibly high prices for them) have already sold the contents of their catalogues to OpenAI, etc.

https://thenewpublishingstandard.com/2024/08/03/as-more-academic-publishers-embrace-ai-trade-publishers-need-to-get-off-the-fence/

The EU approach to AI regulation seems more robust, requiring transparency about the data used to train models. It’s also more sceptical about the supposed ‘benefits’ of the technology:

https://www.europarl.europa.eu/topics/en/article/20230601STO93804/eu-ai-act-first-regulation-on-artificial-intelligence

It feels like we ought to refuse to engage on the terms the government has laid out, and point to the EU’s approach as the better benchmark. But the chances of the government listening seem slim, especially since Keir Starmer has just appointed an ex-Amazon exec to head the Competition and Markets Authority.

https://pluralistic.net/2025/01/22/autocrats-of-trade/

The mood of the meeting, as I read it, was that we’re screwed, and that our best hope is that AI is a hype bubble that bursts sooner rather than later. As a creative worker, I’ve never found the prospect of living and working in mainland Europe more attractive. (And, of course, this all raises the question: will Northern Ireland be covered by EU law on AI?)