Universal Music Group has been asking music streaming services like Spotify to stop developers from scraping its material to train artificial intelligence (“AI”) bots that enable users to create new songs from text prompts. The label – which controls about a third of the recorded music industry – has also been issuing substantial numbers of takedown requests in response to uploads of AI-generated music onto streaming platforms. It is the latest move in the music industry’s growing battle to prevent AI generators from using songs without licensing them.
Behind these efforts to enforce copyright, the big issue centers on how governments will balance AI against human creativity. In particular, the government in the United Kingdom is threatening to water down copyright laws to benefit tech companies at the expense of not only the music industry but also creative businesses like literature, films, and photography. So, what’s going on?
AI Music & Copyright
On royalty-free music generators like Mubert, it is already possible to type in a text prompt and have the program use an AI model to search a catalog of music for patterns. Tell it to play a “fast voodoo rhythm in the style of a nursery rhyme with some pretty electronics,” and it will copy parts of songs that are in line with that prompt (and that are among the vast amount of data it has been trained on) and generate music to match. You can also generate music that sounds like a particular artist or specific song, and whatever music you create is downloadable.
While Mubert claims to be “on a global mission to empower creators,” it is unclear how that squares with not paying human creators royalties for the use of their music as part of the process of training the algorithms to create the right output. Mubert even emphasizes that its audio material is made “from real musicians and producers,” recognizing that the value in the music is coming from human creators.
Since music is protected by copyright law (for the musical work and the sound recording), anyone wanting to use a song generally has to obtain a license. This ensures that rightsholders and creators are paid properly for their creativity. For example, Spotify pays a license to record labels and artists to put music on its platform; the same is true of establishments like bars, cafes, and clubs that play records for their customers, and for artists, who sample someone else’s song in their own track.
Against this background, if AI programs are using labels’ music catalogs without permission, they could be seen to have infringed music rights in at least two ways: first, by copying the music to train the AIs, and second, in the parts of that training data reproduced in the music the AI generates. If the streaming platforms were seen to have facilitated such illegal activity, they could be found indirectly liable for copyright infringement, comparable to a pirated music downloading platform like The Pirate Bay, for instance.
Unfortunately for the music industry, the government in the UK has been muddying the waters with proposals to change the copyright rules to benefit tech companies. A few months ago, it floated the idea of creating an exception for the first type of infringement: using music catalogs as training data. This copyright exception for text and data mining would also apply to other artistic works like videos and photographs.
There are already copyright exceptions in the UK where permission/a license for reuse is unnecessary, such as for the purpose of “criticism, review or quotation,” though there are limitations to make sure this is done fairly.
When governments want to create a new exception, they must satisfy the three-step test set out in the Berne Convention. The exception: (1) must be confined to certain special cases; (2) must not conflict with the normal exploitation of the work; and (3) must not unreasonably prejudice the legitimate interests of the rightsholder. It can be argued that the UK proposal for “licensing or exceptions to copyright for text and data mining” meets none of these steps and thus would be contrary to international law.
The Battle for the UK
The proposed exception was met with widespread objections, with only 13 out of 88 responses to the consultation in favor. In January, the House of Lords Communications and Digital Committee said the proposal was “misguided” and should be scrapped. The government did appear to backtrack in response in February, with science minister George Freeman saying it would not take the exception forward.
In March, however, the Secretary of State for Science, Innovation and Technology published a policy white paper – entitled, A Pro-Innovation Approach to AI Regulation – which raised the prospect that the Department for Science, Innovation, and Technology might be reviving its previous approach. The white paper aims to prioritize the positioning of the UK as a tech-friendly environment and emphasizes “the role of regulation in creating the environment for AI to flourish.” It mentions risks related to mental health, privacy rights, and human rights that could stem from the widespread adoption of AI; it does not list any threats to intellectual property. (In fact, the lengthy policy paper mentions “intellectual property” just once.)
THE BIGGER PICTURE: The Department for Science, Innovation and Technology’s paper comes at a time when governments around the world and international organizations, such as the World Intellectual Property Organization, are considering how laws should be adapted to address AI. Legislators in Japan and Singapore, for example, are already introducing copyright exceptions along similar lines to those being discussed in the UK. This is a major concern for the creative industries in those countries too, though not to the same extent as in the UK, which tends to be particularly influential in the realm of intellectual property law around the world.
There are no proposals for copyright exceptions in the U.S. or the European Union. In fact, in the U.S., intellectual property laws are currently being tested by a number of pending lawsuits that center on AI, including the copyright and trademark suit that photo giant Getty Images filed against Stability AI, the maker of the image generator Stable Diffusion, which it alleges has been scraping its images to train its model and, ultimately, to generate new ones. U.S. copyright law has a “fair use” exception, which could potentially be a defense for these operators, and Getty wants confirmation that such a defense does not apply in this case. It has also initiated litigation along the same lines in the UK, which is at an earlier stage.
The bottom line, to some extent, is whether we still believe human creativity deserves greater protection than machine creativity. Appealing to tech might seem like a good strategy for the UK, but the creative industries contribute hugely to the economy – £109 billion in 2021, or nearly 6 percent of total GDP. Beyond that, the value of music is not limited to raw economics: it offers emotional comfort and health benefits, and even inspires social, political, and economic change. The creators behind such works should arguably be rewarded for this, too, whether they compose music directly or provide the material that AI repurposes. Copyright law is supposed to ensure that creators are fairly remunerated for their work. When music brings such value to the world, that seems like a strong argument for protecting it.
Hayleigh Bosher is a Senior Lecturer in Intellectual Property Law at Brunel University London. (This article was initially published by The Conversation.)