Spotify wants people to know when a song was born from an algorithm.
The Swedish company announced this week a set of measures aimed at identifying and labeling music created with artificial intelligence (AI), in a bid to bring order to a terrain that is advancing faster than the rules meant to govern it.
It is not only a technical decision, but also a political, aesthetic and commercial one.
Music is no longer just human. But who should say it, and how?
By: Gabriel E. Levy B.
In the history of music, media have always shaped art: from the score to vinyl, from the cassette to the algorithm.
Rarely, however, has that mediation been as radical as it is today.
The advent of generative artificial intelligence, capable of creating voices, melodies, and lyrics autonomously or with minimal human direction, has disrupted the rules of the game.
Until just a couple of years ago, the use of AI in music was a laboratory experiment.
Composers such as David Cope, who in the 1990s developed EMI (Experiments in Musical Intelligence), had already raised the disturbing possibility that a machine could imitate the style of Bach or Mozart with surgical precision.
But it was a niche, almost philosophical exercise.
The disruption came when streaming platforms like Spotify began hosting thousands of AI-created tracks without clear labeling, and, more importantly, without listeners being able to easily tell whether what they were hearing came from a human mind or a neural network.
This is the context of Spotify’s announcement.
The implementation of the DDEX (Digital Data Exchange) standard, a collaborative industry initiative to standardize the metadata of musical works, will allow labels and distributors to report whether AI was used in the creative process.
A measure that, although technical, has profound symbolic implications: it is no longer just about what is heard, but about who (or what) composed it.
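The article does not detail the exact fields involved, but to illustrate the idea of metadata-level disclosure, the sketch below shows what such a record might look like. All field and role names here are hypothetical assumptions for illustration; the real DDEX standard defines its own XML schema, which this does not reproduce.

```python
# Illustrative sketch of an AI-disclosure metadata record, loosely inspired
# by the DDEX initiative described above. Field names and role vocabulary
# are hypothetical, not the actual DDEX schema.

AI_ROLES = {"vocals", "composition", "lyrics", "production", "mastering"}

def build_ai_disclosure(track_title, ai_used, ai_roles=()):
    """Return a metadata record declaring whether (and where) AI was used."""
    unknown = set(ai_roles) - AI_ROLES
    if unknown:
        raise ValueError(f"unknown AI roles: {sorted(unknown)}")
    if ai_used and not ai_roles:
        raise ValueError("ai_used=True requires at least one declared role")
    return {
        "track_title": track_title,
        "ai_used": ai_used,
        "ai_roles": sorted(ai_roles),
    }

record = build_ai_disclosure("Example Song", True, {"vocals", "lyrics"})
print(record["ai_roles"])  # ['lyrics', 'vocals']
```

The point of the sketch is the design choice it embodies: disclosure travels with the track's metadata, so a platform can surface it to listeners without having to detect AI use itself.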
A tool, not an artist
For Spotify, the challenge is no small matter.
In its constant quest to offer more content, retain subscribers and reduce licensing costs, AI has become a tempting ally.
According to a report by MIDiA Research published in 2024, approximately 12% of new songs uploaded to digital platforms that year included some AI-generated component.
But this automation also generated distortions: millions of tracks designed to optimize recommendation algorithms, 30-second songs to inflate numbers, and most controversially, vocal imitations of real artists without their consent.
Faced with this scenario, Spotify proposed a three-pronged regulatory response. First, prohibit vocal identity theft through deepfakes unless expressly authorized by the original artist.
A key measure, especially after cases such as FakeDrake, where a viral song imitated the rapper’s voice without any authorization.
Second, combat “music spam”: massive uploads, duplications, and irrelevant content that saturate the platform.
In the last twelve months, the company removed more than 75 million tracks under this category.
And third, establish a transparent labeling system, starting with those labels and distributors that have already adopted DDEX as a reporting standard.
What is at stake is not only the integrity of the platform, but also the relationship between listener and music.
Because if everything sounds “good” but nothing says anything, where is art?
The algorithm also wants to sing
One of the cases that marked a before and after was that of The Velvet Sundown. Last June, this completely AI-generated band reached more than three million streams on Spotify.
Only later did it become known that none of the voices, instruments, or lyrics came from humans. Not even the names were real.
The impact was immediate.
The public felt, in part, deceived. Was it music, performance, experiment or a covert marketing campaign?
The phenomenon exposed the central dilemma: is it valid for an AI-generated song to compete on equal terms with one made by humans? Is it enough to label it? What if the audience prefers the hyperproduced sound of a neural network to the imperfection of a human voice?
According to scholar Anahid Kassabian, author of Ubiquitous Listening: Affect, Attention, and Distributed Subjectivity, music is no longer just a form of expression, but a ubiquitous presence shaped by technological contexts.
In this new ecosystem, authorship is diluted, the experience is fragmented and the listener becomes a user.
For the philosopher Bernard Stiegler, the automation of culture entails the risk of losing symbolic individuation, that is, that which allows people to recognize themselves in a work.
If everything becomes generable, replicable, predictable, what is eroded is not technical quality, but the capacity for significance.
Spotify is trying, with these new rules, to avoid this drift.
But the risk remains: AI-generated music not only mimics styles, it also simulates emotions. And in an era where emotions are capital, that simulacrum can become hegemony.
Who composed this song?
Beyond the case of The Velvet Sundown, other recent episodes reinforce the urgency of a regulatory framework. In 2023, the song Heart on My Sleeve, which used imitations of Drake and The Weeknd’s vocals, went viral on TikTok before being removed due to copyright issues.
Its author, an anonymous user known as Ghostwriter, argued that it was a “critique of the state of commercial music.”
But the impact was such that there was even debate over whether the track should be nominated for the Grammys.
In parallel, platforms like Deezer began systematically tagging AI-generated songs. And YouTube, with its Dream Track tool, allows some creators to use voices from licensed artists. Each actor in the ecosystem takes a stand, revealing the absence of a clear global policy on the issue.
Even well-known artists are beginning to react.
Singer Grimes offered her voice as open source for anyone to use with AI, as long as she receives a share of the revenue.
A model that mixes ethics, economics and experimentation, and that could inspire new forms of collaborative human-machine creation.
But the path is still uncertain.
As musicologist Eduardo Viñuela warns, “the real problem is not the existence of AI-generated music, but the lack of a framework that allows us to understand its aesthetic, legal and emotional status within the musical ecosystem.” And that requires more than just labels.
In conclusion, Spotify’s decision marks a turning point in the relationship between technology, art and platforms. Music created by artificial intelligence is already a reality, but it still lacks an ethical, legal and symbolic framework that clearly situates it. Labeling is not enough, but it is a first step. Because in a world where even emotion can be encoded, the challenge is not for machines to compose, but for us not to forget why, for what and for whom it is composed.
References:
- Wired en Español. (2025, September 26). Spotify tightens its policies to regulate AI-generated content. https://es.wired.com/articulos/spotify-endurece-sus-politicas-para-regular-contenidos-generados-con-ia
- Xataka. (2025, September 25). Spotify is dealing with an avalanche of songs made with AI, so it has decided to react and set limits. https://www.xataka.com/robotica-e-ia/spotify-esta-lidiando-avalancha-canciones-hechas-ia-asi-que-ha-decidido-reaccionar-para-marcar-limites
- Newsweek en Español. (2025, September 24). Spotify faces AI in music: new measures seek to protect artists and listeners. https://newsweekespanol.com/entretenimiento/spotify-se-enfrenta-a-la-ia-en-la-musica-nuevas-medidas-buscan-proteger-a-artistas-y-oyentes



