The rise of generative voice artificial intelligence (AI) is one of the most exciting yet troubling phenomena of an era dominated by technological advancement. This technology has dramatically changed how we engage with devices and information, but it has also raised several important concerns. In this article, I will take a closer look at some of the main issues and moral dilemmas posed by generative voice AI.
Take, for example, Speechify, a software program designed to convert text into spoken audio. Its customizable features let users adjust and convert long texts and choose from a variety of voices (Weitzman, 2023). I could record my own voice and have the program read any text aloud, making it sound as though I were the one reading it. I initially didn't have high expectations for this software, but I was surprised by how much it sounded like me. I should note that since I didn't have the paid edition, I was unable to use American English, and there are other, more professional voice AI generators. Still, I think the generated voice is fairly close to mine, despite sounding a bit distorted.
Although this technology is revolutionizing the way we communicate and interact with devices, it has also raised some serious ethical questions for me. One of the most obvious concerns is the ability to clone voices (Eliot, 2022). With this technology, anyone can produce audio clips that sound authentic. A recent example that stands out is when Stephen Fry's voice was appropriated for a voice-over in a documentary. It turned out to be a synthetic voice that had learned Fry's characteristic speech patterns from the Harry Potter audiobooks he had narrated. Fry's response: "They can make me say anything. From a command to storm the parliament to dubbing hardcore pornography, all without my knowledge and without my consent" (Duggins, 2023). This means false information can be disseminated, political figures can be impersonated, and alarmingly convincing fake news can be produced. It also makes me think about everyday life. I find this a frightening thought, especially when it comes to parents and children, as it might allow a criminal to trick or pressure a child using a voice that sounds like their parents'. I can easily imagine that children would find it even harder than adults to tell whether a voice is artificially generated, which is already a difficult task.
These advancements raise new moral questions (Cox, 2019). In my view, it is unclear how user data is collected on these platforms, which raises important concerns about transparency. Companies have an obligation to make their customers aware of the risks of sharing data. Another issue is who "owns" a voice once it has been recorded. Since these voice recordings are susceptible to hacking and misuse, which could have dire consequences, I also question how businesses can guarantee their security.
Given the growing significance of voice AI technology, clearer regulation and adherence to laws are urgently needed. For example, the California AI Act was created to restrict the government's use of generative AI technology, and federal institutions such as the Department of Energy have been suggested as ideal recipients of funding for non-defense AI research (Ciccarelli & Ciccarelli, 2023). However, given the particular challenges this kind of technology presents, I believe existing legislation may not be adequate to address all the issues that arise when working with speech technologies.
In a time when voice artificial intelligence is becoming ever more prevalent in our daily lives, it is crucial that we take these ethical concerns seriously and discuss them openly. What steps can people take to safeguard their privacy in a world where voice AI is increasingly commonplace? Do you foresee other potential scenarios for exploitation, or are you intrigued by the creative possibilities? Please let me know!
Sources
Ciccarelli, D., & Ciccarelli, D. (2023). Are AI voices legal? Voices. https://www.voices.com/blog/ai-voices-legal/#:~:text=AI%20voices%20are%20subject%20to,being%20used%20for%20legitimate%20purposes.
Cox, T. (2019, May 20). The ethics of smart devices that analyze how we speak. Harvard Business Review. https://hbr.org/2019/05/the-ethics-of-smart-devices-that-analyze-how-we-speak
Duggins, A. (2023, September 20). ‘It could have me read porn’: Stephen Fry shocked by AI cloning of his voice in documentary. The Guardian. https://www.theguardian.com/technology/2023/sep/20/it-could-have-me-read-porn-stephen-fry-shocked-by-ai-cloning-of-his-voice-in-documentary
Eliot, L. (2022, July 2). AI ethics starkly questioning human voice cloning such as those of your deceased relatives, intended for use in AI autonomous systems. Forbes. https://www.forbes.com/sites/lanceeliot/2022/07/02/ai-ethics-starkly-questioning-human-voice-cloning-such-as-those-of-your-deceased-relatives-intended-for-use-in-ai-autonomous-systems/?sh=73391d884882
Weitzman, T. (2023, June 28). How does Speechify work? Speechify. https://speechify.com/blog/how-does-speechify-work/?landing_url=https%3A%2F%2Fspeechify.com%2Fabout%2F&source=fb-for-mobile&gclid=EAIaIQobChMIttj62vjbgQMVUA4GAB07vwzMEAAYASAAEgK_rfD_BwE&via=speechify-ai