When AI makes music

9 October 2025


Over the past summer, I spent some time experimenting with Suno, an AI music generation tool that turns short text prompts into complete songs. This interested me in particular because I enjoy playing instruments but have no experience with digital music production. The application makes it remarkably easy to create melodies, harmonies, and lyrics that sound coherent. Suno and comparable applications lower the barrier to music production, making it accessible to people with no prior experience in creating music.

The ease with which these applications can now be used reflects a broader transformation in how music is composed and experienced. Briot (2020) explains that advances in deep learning have allowed AI models to learn musical structures from large collections of data and to generate new music that fits within those styles. He distinguishes two types of generation: autonomous generation, where the system creates music on its own, and composition assistance, where the user guides the process through creative input and feedback. My experience with Suno fits the second description best, because the tool does not replace creativity completely; it translates my prompts into musical output. In the way I used Suno, the application acted mostly as a creative assistant, though at times also as an autonomous generator.

Besides making it possible for people without musical knowledge to create music, these applications also raise significant legal and ethical challenges. Gervais (2019) notes that copyright law is built on the concept of human authorship. Since AI operates largely autonomously, it falls outside traditional legal definitions of creativity and ownership. This legal challenge became clear to me when I tried to upload an AI-generated track to SoundCloud and was asked to confirm the ownership rights. Although I had shaped the prompts and the creative direction, I could not confidently claim to be the legal author of the music.

Hugenholtz and Quintais (2021) argue that creativity in copyright law involves three stages: the conception of an idea, its execution, and the final refinement of the work. Copyright protection, they note, requires meaningful human input across these stages. When an AI system carries out most of these steps autonomously, without relevant human creativity, the result cannot be regarded as a protected work. In my experience, a tool like Suno automates much of this process: the user provides a brief prompt, but the system composes, arranges, and polishes the final piece. As a result, the main creative labour lies with the algorithm rather than the human user. This not only limits legal protection but also raises ethical concerns about authorship and artistic responsibility. It reinforces my earlier point that while applications like Suno make music creation remarkably easy, they also blur the boundaries of what it means to be a creative human being.

Sources:

Briot, J. (2020). From artificial neural networks to deep learning for music generation: History, concepts and trends. Neural Computing and Applications, 33(1), 39–65. https://doi.org/10.1007/s00521-020-05399-0

Gervais, D. J. (2019). The machine as author. Iowa Law Review, 105(5), 2053–2106. https://ilr.law.uiowa.edu/print/volume-105-issue-5/the-machine-as-author/

Hugenholtz, P., & Quintais, J. P. (2021). Auteursrecht en artificiële creatie [Copyright and artificial creation]. Auteursrecht, 47–52. https://pure.uva.nl/ws/files/61822465/Auteursrecht_2021_2.pdf


Meta’s newly launched smart glasses

19 September 2025


Yesterday, Meta introduced three new models of its smart glasses: the Meta Ray-Ban Display, the next generation of the Ray-Ban Meta, and the Oakley Meta Vanguard. These glasses are more than just gadgets. They serve as platforms for information, handling data in real time while being worn like any other pair of glasses.

Smart glasses are no longer something futuristic; they are already being tested and used in a variety of settings. Kim and Choi (2021), for instance, reviewed how these devices are being applied and noted that they can support tasks in healthcare, education, and other parts of industry by giving users hands-free access to information. However, research by Laun et al. (2022) points out that the bigger question is not whether the technology works, but whether people actually want to wear it. Acceptance is largely determined by whether the glasses are comfortable and easy to use. This helps explain why Meta is rolling out different versions of its glasses: some aimed at everyday convenience, others targeting more specialized needs such as professional work or sports.

A key element of Meta’s new devices is gesture-based control. Instead of buttons or touchscreens, users can interact with the glasses through subtle hand or wrist movements. Research demonstrates that real-time gesture recognition can make wearable devices more efficient and intuitive, which is essential if smart glasses are to become part of everyday life (Lu et al., 2022).

Speaking from personal experience, I can see a clear application in sports. As someone who enjoys road cycling, I often rely on a phone mounted to my bike for speed, distance, and heart rate. While useful, it forces me to look down, which can be distracting and, above all, unsafe. Smart glasses that project this information directly into my field of vision would make cycling much safer. If companies can solve issues like battery life and, especially, weather (rain) resistance, I believe these devices could genuinely change how cyclists train.

  • Kim, D., & Choi, Y. (2021). Applications of smart glasses in applied sciences: A systematic review. Applied Sciences, 11(11), 4956. https://doi.org/10.3390/app11114956
  • Laun, M., Czech, C., Hartmann, U., Terschüren, C., Harth, V., Karamanidis, K., & Friemert, D. (2022). The acceptance of smart glasses used as side-by-side instructions for complex assembly tasks is highly dependent on the device model. International Journal of Industrial Ergonomics, 90, 103316. https://doi.org/10.1016/j.ergon.2022.103316
  • Lu, Z., Cai, S., Chen, B., Liu, Z., Guo, L., & Yao, L. (2022). Wearable real-time gesture recognition scheme based on A-mode ultrasound. IEEE Transactions on Neural Systems and Rehabilitation Engineering, 30, 2623–2629. https://doi.org/10.1109/TNSRE.2022.3205026
