AI Voice Cloning Scams: does AI always benefit?

20 September 2024


Artificial intelligence can benefit people in many ways, including health care, legal consultation, and education, but its applications are not always positive for everyone. According to recent news, the British bank Starling has warned that ‘millions’ of people could fall victim to voice cloning scams.

Imagine that you post a five-second video on Instagram; that short clip is enough for scammers to use AI to clone your voice. They can then call your close family or friends, posing as you, and ask for money. A survey Starling Bank conducted with Mortar Research showed that 46% of respondents did not even know this kind of scam exists, and 8% said they would do as the scammers asked and transfer as much money as requested. More than a quarter of respondents reported having been targeted by this kind of scam in the past year.

AI-generated voices can be made convincing by technology such as OpenAI’s Voice Engine, which was designed for purposes like improving the quality of customer service. However, good intentions do not always lead to good outcomes: as the technology develops, criminals also come up with new ways to use it in their favor.

Have you watched ‘Black Mirror’? In my opinion, as AI technology develops, it will penetrate and complicate people’s lives. The episode ‘Be Right Back’ is about AI simulations of people’s deceased loved ones: a woman’s boyfriend dies in a car accident, and an AI company offers a service that simulates his voice, his way of thinking, and even his body. They seem to live together just as before, but it is never really the same person, right?

Questions: Do you think the benefits of AI outweigh its negative effects? How can we cope with challenges like this scam? How can we balance innovation with ethical concerns?

Source: https://www.cnn.com/2024/09/18/tech/ai-voice-cloning-scam-warning/index.html


8 thoughts on “AI Voice Cloning Scams: does AI always benefit?”

  1. Hey, your post really highlights an important issue with how AI is evolving. AI brings so much value, but the risks are becoming harder to ignore. I think the benefits of AI can outweigh the negatives, but only if we’re proactive in managing those risks. The fact that voice cloning scams are now possible shows that we need to develop better safeguards as fast as AI evolves. To answer your first question: while AI has the potential to enhance many industries, such as the ones you mentioned, I think the challenge lies in regulating its use without stifling innovation. We can’t expect technological progress to slow down, but we need stricter laws and better ethical guidelines to make sure it’s used responsibly. A solution to the voice scam you mentioned could be mandatory authentication checks or verification processes integrated into voice-related AI systems to prevent scams. Additionally, I’m afraid regulations will only be implemented when it is already too late and many people have already fallen victim to scams like these.

    1. Thank you for the insightful reply. I agree with you, and I really like what you said: ‘The challenge lies in regulating its use without stifling innovation’. Finding the balance is always the most difficult part.

  2. The issue you address in this blog post is a very important one, and one that needs to be addressed by both companies and governments. I must say, it is this negative side of AI that at first made me very skeptical about using GenAI at all. However, now that I learn more and more about the benefits it creates by allowing us to make much more efficient use of our time (as students, but also in working life), I believe the benefits outweigh the potential problems. Still, this does not mean AI should not be kept within boundaries set by governments, whose job it is to protect people from harm. You mention voice cloning used for fraud; another example of a very bad use of AI is the phenomenon of deep nudes. In The Netherlands, about two years ago, a documentary called ‘Welmoed en de Sexfakes’ (Welmoed en de Sexfakes, n.d.) was broadcast. In this documentary, a journalist named Welmoed dives into the world of fake porn, deep nudes, and the dark side of GenAI after she discovered that an AI-generated sex tape featuring her had been made. I believe these negative uses of AI need to be prevented at all costs. However, we should not ban AI from our societies. AI can be used for very good ends; think about the use of AI in healthcare, providing more accurate diagnoses, to name one. We need to support AI development and innovation, and at the same time we need well-educated people to warn us about its negative sides, making new laws in time to be able to take action against bad usage of AI. With that course of action, I believe we can get the most out of the endless possibilities AI possesses.

    References:
    Welmoed en de sexfakes. (n.d.). npo.nl/npo3. https://npo.nl/npo3/welmoed-en-de-sexfakes/POW_05416189

    1. Thank you for the reply! The documentary you mentioned sounds interesting; I will watch it later! I like the idea of educating people about AI regulation: the technology is developing so fast that getting everyone to understand and use it properly is crucial.

  3. Dear Shanshan,

    You address an important and overlooked issue with the current development of artificial intelligence. Bottom line, I think it is clear that artificial intelligence has the potential to create far more value than these practices cost, since I believe this development could fall into the same category as the development of agriculture and the industrial revolution in terms of its potential for the economy, as highlighted by Bostrom (2014) in his seminal book “Superintelligence”. Nevertheless, the negative impacts of several generative AI applications should be researched and addressed. In my opinion, the best way to mitigate these threats is to limit the open-sourcing of synthetic voice and image/video generation models once they hit certain benchmarks. Also, I think state-of-the-art generative AI should be subject to laws that require watermarking, strict content filters, and accountability for what is created with the software. Although these measures should remedy part of the issue that you have laid out, I think there is a big role for new startups and technologies that help detect these clones. The Dutch startup DuckDuckGoose is a good example of this.

    In addition, I would like to add that regulators should be extremely careful when regulating artificial intelligence, especially in the EU. We have been the policeman of the world for some time, and that has led to good outcomes (e.g., USB-C chargers), but the development of AI calls for a different approach. We need to ensure that the EU becomes a cradle for AI companies so that we do not miss out on imprinting our values on the technology. This technology will cause a landslide in the global theatre and does not respect borders. Whatever the USA or China creates in this field will eventually come to Europe, regardless of regulations. Hence, the main premise of EU policy should be: Innovate, not over-regulate.

    References

    – Bostrom, N. (2014). Superintelligence: Paths, Dangers, Strategies.

    1. Thank you for the reply. Yes, I believe AI regulation should be a worldwide issue; people should have a shared understanding of the ethical boundaries of AI applications. And let us hope AI helps make the world a better place!
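
       To make the watermarking idea from your comment a bit more concrete, here is a minimal toy sketch in Python (every name and parameter below is my own invention for illustration; real provenance marking for AI-generated speech is far more sophisticated). The idea is that the voice generator mixes a secret, low-amplitude pseudo-random signature into the audio, and anyone who knows the secret can later check for it by correlation.

       ```python
       # Toy illustration only: a simple spread-spectrum "signature" added to an
       # audio signal and detected by correlation. All parameters are invented;
       # real watermarks for AI-generated speech must survive compression,
       # re-recording, and deliberate removal attempts.
       import numpy as np

       SECRET_SEED = 1234   # shared secret between generator and detector
       GAIN = 0.05          # exaggerated here so the toy detection is obvious

       def embed_watermark(audio: np.ndarray, seed: int = SECRET_SEED) -> np.ndarray:
           """Mix a low-amplitude pseudo-random sequence into the signal."""
           mark = np.random.default_rng(seed).standard_normal(len(audio))
           return audio + GAIN * mark

       def watermark_score(audio: np.ndarray, seed: int = SECRET_SEED) -> float:
           """Correlate the signal with the secret sequence; a score near GAIN
           suggests the mark is present, a score near zero suggests it is not."""
           mark = np.random.default_rng(seed).standard_normal(len(audio))
           return float(np.dot(audio, mark) / len(audio))

       if __name__ == "__main__":
           # One second of stand-in "speech" (random noise) at 16 kHz.
           speech = np.random.default_rng(0).standard_normal(16_000)
           marked = embed_watermark(speech)
           print(f"score without mark: {watermark_score(speech):+.4f}")  # close to 0
           print(f"score with mark   : {watermark_score(marked):+.4f}")  # close to 0.05
       ```

       Of course, a scheme this simple proves nothing about real products; it is only meant to show the kind of provenance check that watermarking laws could require.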

  4. First of all, I want to praise the student’s choice to highlight the other end of the spectrum around AI implementation. We can read a lot of blogs about the benefits of implementing AI in many different fields, but the author has chosen another route, which makes this point of view a discussion point against all the other blogs.

    The author has succeeded in holding the reader’s attention and in prompting the reader with questions to think critically about. Moreover, the author zoomed in on a specific potential negative consequence, crime enabled by AI, and gave the reader a clear idea of the concept and its effects.

    However, I had hoped to read about more academically supported potential risks of the future use of these technologies for crime, beyond the already applied voice duplication. I would also have hoped to read about how AI technology can be used to address crime. For instance, the research of Thuy and Hieu (2020) discusses the existing knowledge about the use of AI for crime, as well as the advantages and disadvantages of using AI to prevent and fight crime. As a matter of fact, I read a recently published report by Europol about how the implementation of AI can drastically improve law enforcement operations (2024).

    Nevertheless, the author inspired me to do more research on the subject and thus achieved the objective of encouraging discussions from others!

    Thuy, N., & Hieu, N. (2020). Developing Artificial Intelligence in Fighting, Preventing and Combating the Digital Business Crimes. Atlantis Press. https://doi.org/10.2991/aebmr.k.200127.090

    How AI Can Strengthen Law Enforcement: Insights from Europol’s New Report | Europol. (n.d.). Europol. https://www.europol.europa.eu/media-press/newsroom/news/how-ai-can-strengthen-law-enforcement-insights-europols-new-report

    1. Thank you for the informative reply. It is quite a unique angle you bring up, how AI can be used to address AI-enabled crime. It sounds like a circle, but ultimately, it all comes down to human choice. Thank you for the useful references; I will check them out!
