What the UK Government’s AI Copyright U-Turn Changes for Artists
Last week, an artist I work with was pretty pleased about the news. The Government had dropped the proposal; the music industry had pushed back and won. “So AI can’t use my music without permission now, right?”
The honest answer is: technically, it never could. Under UK copyright law, the default position has always been that using a protected work without permission and clearance is infringement. AI companies need permission or a licence to use any protected works. They always did, and they still do.
So why does nothing feel different after this landmark announcement?
If you are sitting there thinking about it, you could be there a while, and I don’t mean that sarcastically. The question is not “who won?” The headlines have covered that. The more useful questions are what the law actually says, what the announcement changed in real practice, and what it means for an artist sitting on their catalogue right now.
The answer is: quite a lot less than most people think. But that does not mean there is nothing to do.

About the Author
I’m Ron Pye, founder of IQ Artist Management. I have spent three decades working through the music industry’s biggest structural shifts: the piracy crisis, the collapse of physical sales, the streaming transition, and now the generative AI disruption. I hold an MA in Music Industry Studies with Distinction from the University of Liverpool, where my research focused on the relationship between technology, law, and the music industry, and a BA in Music Business and Finance.
AI copyright law is something I work with practically every day. It comes up in management agreements, in publishing contracts, and in conversations with artists who are trying to work out where they stand. Understanding the difference between what the law says and what an artist can realistically do about it is no longer academic.
I have worked across many sectors of the industry, including management contracts, music publishing, royalties, sync licensing, and digital marketing. That breadth and depth matter when copyright law shifts, because the impact won’t land in one place. It will move through income streams, contract terms, and platform relationships all at once.
If you are an artist trying to make sense of the Government’s U-turn and what it actually means in practice, not just the headline but where you genuinely stand right now, this is the perspective that comes from managing real artists through exactly that uncertainty.
What the Copyright, Designs and Patents Act 1988 (CDPA) Says

UK copyright law works on a simple premise. Someone creates a work, copyright attaches automatically the moment the work is fixed in a recorded form, and from that point anyone who copies or uses that work commercially needs either permission or a licence. No registration is required. No filings, no actions. Infringement is the default position. Permission is what gets you around that default. It’s often described as a ‘permission-based system’.
There is a section that matters here. Section 29A of the Copyright, Designs and Patents Act 1988, inserted in 2014, created a limited exception for text and data mining. But read it very carefully. That exception covers research purposes only, and only where there is no commercial motive. An academic institution training a model to study language patterns is fine. A commercial AI company scraping catalogues of recorded music to build a product has a clearly commercial purpose and is not covered.
A lot of people assume that the UK adopted something like Article 4 of the EU’s 2019 Digital Single Market Directive, which allows commercial text and data mining (TDM) with an opt-out mechanism for rights holders. It didn’t. The UK left the EU before the Directive was transposed into national law and also chose not to bring in an equivalent. So, while European AI companies work within a slightly different framework, UK rights holders actually sit in a stronger legal position on paper. There is no commercial exception in UK law for TDM, at all.
It’s also worth being clear about the scope here. The question of who owns the output of an AI system is a separate and genuinely contested area. The AI Generated Music Copyright piece I have written previously covers that in detail. Here the focus is on training data. Specifically, whether AI companies have the right to use your music to build their models in the first place.
In short, they don’t. Not without a licence anyway.
What the Proposed TDM Exception Would Have Actually Done

On 17th December 2024 the UK Intellectual Property Office (IPO) and the Department for Science, Innovation and Technology (DSIT) launched a major consultation on text and data mining, drawing in stakeholders from the creative, technology, and legal sectors. Entitled “Copyright and Artificial Intelligence”, it had logged over 11,500 responses by the time it closed on 25th February 2025. The proposal on the table was a broad exception covering commercial AI training, paired with an opt-out mechanism so rights holders could block their works from being used if they so wished. The Government presented this as a balanced approach. It wasn’t.
Right now, the burden sits with AI companies. They need a licence. Train on someone’s music without one and you are infringing copyright. The proposal would have flipped that on its head. Commercial AI training would have become permissible by default. Artists would have needed to actively opt out of every dataset, every platform, every training run they didn’t want their work included in. The Government has since indirectly conceded that this was simply not a workable position.
Think about what that means for an artist with no management, no legal team, and no reliable information about where their music has ended up online. You would need to know which AI companies were training on what, then file opt-outs with each of them, in whatever format they required, before any training took place. And if you missed one, either because you didn’t know about it or because the process wasn’t straightforward or accessible, your works would be fair game. Permanently.
That’s not a compromise. It’s a transfer of responsibility from the party with the resources to manage it to the party without them. In a market where a lot of artists operate entirely alone, that distinction matters massively.
The music industry’s response was to unite. UK Music, the BPI, PRS for Music, PPL, and the Music Publishers Association all submitted formal opposition to the proposal, urging the Government to mandate transparency in AI training and to create a licensing marketplace for AI developers and copyright holders. The argument was structural as well as political. The House of Lords sided with that view when the issue came to a vote, but the political outcome is only part of the story.
The U-Turn Left Nothing Behind It
When the Government announced it was not proceeding with the broad TDM exception, coverage in the music press was broadly positive. Artists had pushed back. Parliament had listened. The opt out proposal was dead.
And then what?
Well, we appear to be at a standstill, because there is no alternative framework. No enforcement mechanism. No announcement from the Intellectual Property Office about how existing rights would be policed in the context of AI training. The IPO website carries guidance on AI and IP, but nothing resembling an active investigation into commercial TDM infringement. The status quo returned, which sounds like a win until you look at what the status quo actually is.
UK copyright law says commercial AI training requires a licence. Nobody is checking whether AI companies have one. There is no registry, no audit requirement, no reporting obligation. No regulation. And, unlike broadcast licensing, where the BBC cannot legally transmit music without a blanket licence from PRS for Music, there is no comparable gate that AI companies are required to pass through before training on protected works.
Class action litigation is the other route that sometimes comes up. But the UK has nothing comparable to the US class action system. Individual rights holders cannot easily pool a claim the way US labels or publishers can, and the cost and complexity of a UK copyright infringement case is prohibitive for most artists.
So, this is where things stand. The law is with you. Enforcing it is the ongoing problem.
The Real Problem Is Enforcement Infrastructure

The law gives artists a stronger position than most realise. But what is a right worth if you can’t enforce it, or don’t know how to? It’s a right in name only, and that’s roughly where things are in 2026.
PRS for Music collected over £1.3 billion in royalties in 2024. It operates by pooling your rights across hundreds of thousands of members, then negotiating licences with broadcasters, streaming platforms, and venues, and distributing the proceeds accordingly. PRS dates back to 1914; that infrastructure took more than a century to build. PPL, which covers performer rights and record company rights for broadcast and public performance use, has been operating since 1934. Similarly, MCPS handles mechanical rights for the reproduction of musical works. Each of these organisations exists because at some point the music industry recognised that individual enforcement was impossible and collective action was the only viable route.
None of them, currently, have a mandate to address AI training data.
That’s not a criticism; they all do a fantastic job. It’s a structural observation about an industry changing at the fastest rate in living memory. Their licensing frameworks cover specific, defined uses. AI training is a new category of use that none of these bodies was built to handle. No collective licensing body in the UK currently has the scope or the infrastructure to negotiate on behalf of rights holders in this space, and none, as far as we are aware, has been asked to develop one.
So what would meaningful enforcement really need? At minimum, something like a registry, a database that AI companies are required to check before training on protected works, or a certification requirement, similar in principle to how broadcast licensing works, where commercial deployment of an AI model requires evidence of cleared training data. Some version of this is moving through EU policy thinking, though the AI Act as passed focuses more on transparency than on licensing obligations. The UK, having left the EU, has no equivalent in development.
There is one voluntary model worth knowing about. Fairly Trained, founded by Ed Newton-Rex after he resigned from Stability AI over its approach to copyright in 2023, certifies AI companies that train only on licensed or public domain content. Some companies have signed up. But it is entirely voluntary, carries no legal force, and the largest players in the space have not yet participated.
The music industry built PRS because individual enforcement became impossible. The same logic applies here. It’s worth asking who builds the equivalent, and how long it is going to take.
What Artists Can Do Right Now
The question I get most, usually after someone reads a headline, is some version of “So what do I actually do?” And it’s a fair question. There’s no single action that will fully protect you, but there are things worth doing that will matter when the enforcement infrastructure eventually arrives.
Start with your registrations. Getting your works properly registered with PRS for Music and PPL is no longer optional if you take your catalogue seriously. If AI companies are eventually required to license training data or pay into a collective pool, that money will flow through existing royalty distribution channels. Artists with incomplete metadata, unregistered works, or inaccurate performer credits will be at the back of the queue. This is exactly what happened in the early days of streaming. The infrastructure existed, but vast amounts of money sat in suspense accounts because the data behind the music was, frankly, a complete mess.
Platform choices matter more than most artists realise, too. When you upload music to a platform, you are agreeing to its terms of service, and those terms vary considerably in how they treat AI. Some platforms have already signed deals with AI companies. Others have updated their terms to allow uploaded content to be used for AI training. Know what you’re signing before you sign it. Distribution still matters, particularly as distributors appear to be being acquired by some major players. So this isn’t about avoiding platforms. It’s about reading the small print, which in reality none of us do.
The Suno and Udio litigation in the US is worth following. In both cases the RIAA brought copyright infringement claims over AI training data. Tellingly, both settled out of court rather than going to trial. That matters because a settlement closes the litigation without establishing a legal precedent. But if future cases produce rulings, UK courts will almost certainly take notice.
And then there is our own Parliament. Waiting for Parliament to fix this is an understandable frustration, but it is not a viable strategy. Legislation in this space is years away at best. The practical protection available right now is registration, informed platform choices, and keeping documentation of your copyright ownership in order. It’s not glamorous and it’s not future-proof, but it’s as solid as it gets at the moment.
The Honest Answer
So circling back to “AI can’t use my music without permission, right?”
Technically, yes. It never could. Under CDPA 1988, commercial AI training on your music without a licence is still infringement. That was true before the consultation, during it, and it’s true now. This announcement didn’t change the law because the law was already on your side.
What it also left unchanged is the infrastructure. There’s no collective licensing body with a mandate here, no registry, no enforcement mechanism, and no realistic route for an artist to pursue a claim alone. The status quo the announcement returned to is one where rights exist and enforcement doesn’t.
And that’s the honest position in 2026: murky waters.
Enforcement infrastructure will arrive eventually. When it does, the money will move through existing royalty distribution channels. The artists with their registrations in order, metadata complete, and copyright documentation solid will be the ones who benefit. Not the ones who waited to see how it played out.