“Exploring AI-Generated Music and Deepfake Voiceover”

9 October 2024


Two years ago, I stumbled upon a new song called “Heart On My Sleeve” that sounded like a collaboration between Drake and The Weeknd but turned out to be AI-generated. The melody sounded very much like something Drake would make, and I couldn’t tell the difference between the voice of “AI-Drake” and the artist himself. The song definitely had a certain charm, but it lacked the polish of an actual studio production. It sparked a debate about authenticity, ownership, and the impact on real musicians in the music industry.

I remember that when AI-generated songs first came out, most people on social media reacted with amazement. Reactions were mixed: some found AI a fun and accessible way to create music, while others believed it could ruin the music industry by suppressing creativity and triggering legal issues. It also didn’t take long for Universal Music Group and other major labels to take legal action against AI-generated songs, and “Heart On My Sleeve” was pulled from all music streaming platforms within a week.

Over the last two years everyone has grown more used to artificial intelligence, and AI in music has taken a few steps forward. AI-generated songs sound less clunky and much more polished, and AI tools like AIVA and Amper Music are widely used in the music industry to help producers compose and arrange their songs (Marr, 2023). On the other hand, songs that are entirely AI-created do not seem to be very popular at the moment. What AI is mainly used for in music is voice-overs and covers. Social media platforms like Instagram and TikTok are filled with videos of Patrick from SpongeBob singing “All I Want for Christmas Is You” by Mariah Carey or Eric Cartman from South Park singing “Chandelier” by Sia. These so-called “deepfake voices” sound extremely realistic, and I wanted to find out whether I could make such an audio clip myself.

After doing some research I found that many content creators use Kits.ai for deepfake voice-overs, but making a proper deepfake voiceover requires a computer with high-performance components and a lot of memory, which I do not have. I still tried to get as far as possible: I quickly created a free account, and all I had to do was type in the inputs for my song. My idea was to have Boef, a popular Dutch rapper, sing the lyrics of his song “Paperchase” to the melody of German Schlager music. I had to upload five minutes of audio of his voice, so I uploaded two audio files with songs by Boef. The next step was to add the song inputs, which were “Paperchase” and “German Schlager music”. After I gave the system some prompts about the song, it loaded for a few minutes and produced an output. The result was a very raspy tune that definitely sounded like German folk music; only the lyrics were impossible to make out. All in all, I still found it surprisingly good.
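Out of curiosity about what this workflow would look like outside the web interface, here is a minimal Python sketch of how such a voice-conversion service might be called programmatically. The endpoint, field names, and `API_KEY` below are hypothetical placeholders (Kits.ai’s real API may look completely different); the sketch only illustrates the shape of the pipeline I went through: upload reference audio to train a voice model, then convert a source track with it.

```python
import requests

# Hypothetical endpoint and credentials -- placeholders for illustration,
# not the real Kits.ai API.
BASE_URL = "https://api.example-voice-service.com/v1"
API_KEY = "your-api-key-here"
HEADERS = {"Authorization": f"Bearer {API_KEY}"}

def train_voice_model(reference_paths):
    """Upload ~5 minutes of reference audio to train a voice model."""
    files = [("audio", open(path, "rb")) for path in reference_paths]
    resp = requests.post(f"{BASE_URL}/voices", headers=HEADERS, files=files)
    resp.raise_for_status()
    return resp.json()["voice_id"]  # hypothetical response field

def convert_song(voice_id, source_path, style_prompt):
    """Render the source track in the trained voice, guided by a style prompt."""
    with open(source_path, "rb") as f:
        resp = requests.post(
            f"{BASE_URL}/convert",
            headers=HEADERS,
            files={"audio": f},
            data={"voice_id": voice_id, "prompt": style_prompt},
        )
    resp.raise_for_status()
    return resp.content  # rendered audio bytes

if __name__ == "__main__":
    voice = train_voice_model(["boef_track_1.mp3", "boef_track_2.mp3"])
    audio = convert_song(voice, "paperchase.mp3", "German Schlager style")
    with open("schlager_cover.mp3", "wb") as out:
        out.write(audio)
```

The two-step structure presumably mirrors what the web interface does behind the scenes: the reference uploads define the target voice, and the conversion step applies that voice to new source audio.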

Sadly, I couldn’t share the audio file here: as soon as I uploaded it, it was removed due to copyright claims. It was a really fun experience, and I hope AI tools like this will be developed further, so that more people can experiment with them and computers like mine can handle the software better.

Sources:


Can AI Protect Privacy Without Creating Bias in Businesses?

12 September 2024


Last year, someone in my family began a marketing internship at a well-known FMCG company and shared a story about how ChatGPT was used there. At the time, ChatGPT and other generative AI tools were widely used. He told me the company had stressed to its employees that they should never enter company information into these AI tools, because of the risk that sensitive data could be exposed or misused. Eventually, OpenAI’s sites were even blocked, frustrating many interns who used ChatGPT to translate marketing materials.

Shortly after blocking these tools, the company developed its own proprietary AI model to ensure data privacy. While this approach was crucial for protecting confidential information, the model had limitations. It frequently mentioned the company’s name, exhibited bias towards the company, and some of the information it provided was simply inaccurate.

I looked into proprietary AI models and discovered that bias in these systems has been an ongoing issue for years, such as the bias found in Amazon’s AI recruiting tool back in 2018.¹ The tool, which Amazon used internally, was trained on resumes submitted to the company. However, since most of these resumes came from male applicants, the AI system developed a bias against women. The bias was not intentional; it arose from training data that reflected past gender imbalances in the tech industry. Amazon eventually had to stop using the tool because of its biased behavior.
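To make this mechanism concrete, here is a toy Python sketch using scikit-learn and synthetic data I made up for illustration (it is not Amazon’s actual system or data). It shows how a model trained on historically skewed hiring outcomes learns to weight an irrelevant attribute like gender, simply because that attribute correlates with the labels in the training data.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

# Synthetic "resumes": one genuinely relevant feature (skill) and one
# irrelevant one (gender: 1 = male, 0 = female). Illustrative data only.
skill = rng.normal(size=n)
gender = rng.integers(0, 2, size=n)

# Historical labels: past hiring favored male applicants regardless of
# skill, so the label correlates with gender in the training data.
hired = (skill + 1.5 * gender + rng.normal(scale=0.5, size=n) > 1.0).astype(int)

X = np.column_stack([skill, gender])
model = LogisticRegression().fit(X, hired)

# The model assigns a large positive weight to gender: it has learned the
# historical bias, not just the relevant skill signal.
print("skill weight: ", model.coef_[0][0])
print("gender weight:", model.coef_[0][1])
```

No one told the model that gender matters; it simply found the pattern that best predicted the historical outcomes, which is essentially how Amazon’s tool ended up penalizing resumes associated with women.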

This shows that even proprietary AI models designed to protect privacy can carry biases that undermine their benefits. A proprietary model is very different from open-source AI, where the source code is available to the public. Because the code is public, such models benefit from a global community of developers who continuously improve the software. Proprietary AI models, in contrast, offer a higher level of security, since companies develop and control them internally to ensure data privacy. This creates a trade-off between the benefits of community-driven innovation and the need for data protection. The question is: which approach ultimately better balances innovation and privacy?

Sources:


[1] https://www.reuters.com/article/world/insight-amazon-scraps-secret-ai-recruiting-tool-that-showed-bias-against-women-idUSKCN1MK0AG/
