Taylor Swift has registered her voice as a trademark to protect against AI

In the music industry, there's a growing need for protective measures against new generative technologies.

Some tracks explicitly created with artificial intelligence have achieved notable success. On Spotify in particular, a few of these songs have even made their way into fairly well-known playlists. It is therefore plausible that the phenomenon will become increasingly common, especially given how AI software has simplified music production and made it more accessible.

For this very reason, the company that manages Taylor Swift's artistic rights and image has begun registering as trademarks certain expressions related to the singer's voice, as well as a specific image taken from one of her concerts. The aim is to prevent these elements from being used without consent in future tools based on artificial intelligence.

The problem of AI-generated music

[Video via NBC News (@nbcnews): "Taylor Swift is taking new steps to protect her voice and likeness from AI misuse."]

Generative AI tools are trained on vast archives of texts, images, videos, and audio, almost all of it protected by copyright. Establishing exactly which materials an AI system has actually used to produce music and other content, and under what circumstances, is difficult. For some time, however, there has been suspicion that a large amount of audiovisual content available online was generated without adequate safeguards for the intellectual property of the source material, partly because the sector is still immature and lightly regulated.

Some AI systems can now produce tracks that imitate the voice of a specific artist without fully replicating it. In these cases, traditional copyright protections are often insufficient. Registering a vocal expression as a trademark, among other measures, can therefore provide an effective legal basis for countering unauthorized imitations created with artificial intelligence.

A few years ago, some major American record labels, including Universal and Sony, also filed a lawsuit against two AI software companies, Suno and Udio, accusing them of using copyrighted material to train systems that generate new music tracks. Earlier still, at least 200 prominent musicians, including Billie Eilish and Katy Perry, had signed an open letter asking AI companies to stop using their songs to train AI systems.

How are streaming platforms responding?

Bandcamp, the leading digital platform for independent music, has announced a ban on uploading AI-generated songs. The service, the first to establish such a ban, says that users who come across tracks that appear to be AI-generated will be able to report them to a dedicated team, which will carry out the necessary checks and remove the tracks if warranted.

The issue of AI in music concerns not only artists but also listeners. It is no coincidence that, online and on social networks, it is easy to come across DIY methods of varying effectiveness for identifying or avoiding AI-generated tracks on streaming platforms.

Spotify itself has introduced a mechanism that lets artists indicate whether a track was created with artificial intelligence, but disclosure is purely voluntary. The platform still has no upstream vetting system, which partly limits transparency toward users. Other services, such as Deezer and Apple Music, instead appear to be testing more structured approaches, for example a system of clearly visible labels.