Over the last year, I have experimented with AI music tools like Suno and Udio, where you can type a short prompt and get back a fully produced song. Within seconds the system generates lyrics, melody, instrumentation, and vocals, in whatever style you ask for, whether it sounds like you or like someone else entirely. It is like having an entire recording studio on your laptop. But as exciting as this is, it also raises serious questions about originality and ownership.
On the one hand, AI is making music production more accessible. You do not need expensive gear, years of training, or even the ability to sing. A student in Rotterdam can now create a radio-quality track in the same afternoon they finish an assignment. This could unlock creativity for millions of people who might otherwise never have produced music.
On the other hand, the industry is facing serious challenges. Streaming platforms like Spotify have already had to remove millions of AI-generated tracks flooding their catalogues. Record labels are negotiating licenses for AI training on those same catalogues, but the core question remains: who should be paid when a model learns from an artist's voice or style? Cases like the viral Fake Drake track show how easily AI can blur the line between tribute and impersonation.
For now, listeners can usually still tell most AI-generated songs apart from human-made ones. AI music is often described as generic and forgettable, though it is hard to pinpoint exactly what makes it feel so empty. As the technology advances, however, that line is becoming increasingly thin. I still wonder whether originality and emotional depth can ever truly be replicated.
I think one possible improvement would be to require AI music tools to embed a subtle watermark or identifier within generated songs. Similar to how some artists use signature sounds or producer tags, this mark would make clear that the track was machine-generated. It would not prevent people from using AI music for inspiration or experimentation, but it would reduce the risk of someone passing off a fully AI-created track as their own original work. This way, listeners could better distinguish between human and machine creativity.
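To make the watermarking idea concrete, here is a toy sketch of the simplest possible approach: hiding a short "AI" tag in the least significant bits of raw 16-bit PCM audio samples. This is purely illustrative and entirely my own assumption about how such a mark could work; real provenance systems would need robust spectral watermarks that survive compression and re-recording, and nothing here reflects how Suno, Udio, or Spotify actually implement anything.

```python
# Toy LSB watermark for integer PCM audio samples (hypothetical sketch).
# Each embedded bit changes one sample's amplitude by at most 1, which
# is inaudible at 16-bit resolution but trivially erased by any lossy
# re-encode -- hence "toy".

TAG = "AI"  # illustrative marker; a real scheme would embed richer metadata

def to_bits(text: str) -> list[int]:
    """Flatten a string into a list of bits, 8 per character, MSB first."""
    return [(byte >> i) & 1 for byte in text.encode("ascii") for i in range(7, -1, -1)]

def embed_watermark(samples: list[int], text: str = TAG) -> list[int]:
    """Overwrite the LSB of the first len(bits) samples with the tag bits."""
    bits = to_bits(text)
    if len(bits) > len(samples):
        raise ValueError("audio too short to hold the watermark")
    marked = list(samples)
    for i, bit in enumerate(bits):
        marked[i] = (marked[i] & ~1) | bit  # clear LSB, then set it to the tag bit
    return marked

def extract_watermark(samples: list[int], length: int = len(TAG)) -> str:
    """Read `length` characters back out of the sample LSBs."""
    chars = []
    for c in range(length):
        byte = 0
        for i in range(8):
            byte = (byte << 1) | (samples[c * 8 + i] & 1)
        chars.append(chr(byte))
    return "".join(chars)

audio = [i * 37 % 2048 for i in range(64)]  # stand-in for real PCM samples
marked = embed_watermark(audio)
print(extract_watermark(marked))            # prints "AI"
```

The appeal of something like this is that the disclosure travels inside the file itself rather than in metadata that is stripped on upload; the obvious weakness is fragility, which is why production watermarks work in the frequency domain instead.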
Personally, I found using AI to make music surprisingly fun, but also a little unsettling. When the song finished, I could not quite tell how much of it was mine. Did I really create it, or did I just write a clever prompt? If AI can generate a hit song from a simple text input, should we treat the human prompter as the true artist, or does creativity lose its meaning when machines do most of the work?
References:
Collins, K. C., & Manji, A. (2024, June 15–17). Humanizing AI generated music – Can listeners hear the difference? [Paper presented at the 156th Audio Engineering Society Convention, Madrid, Spain]. ResearchGate. https://www.researchgate.net/publication/379671636_Humanizing_AI_Generated_Music_-_Can_Listeners_Hear_the_Difference
Spotify. (2025, September 25). Spotify strengthens AI protections for artists, songwriters, and producers. Spotify Newsroom. https://newsroom.spotify.com/2025-09-25/spotify-strengthens-ai-protections/
Coscarelli, J. (2023, April 19). An A.I. hit of fake 'Drake' and 'The Weeknd' rattles the music world. The New York Times. http://nytimes.com/2023/04/19/arts/music/ai-drake-the-weeknd-fake.html