Could AI-driven K-pop groups potentially become a dominant force in the world of K-pop?

16 October 2023


As an occasional K-pop listener, I came across a new sensation in the K-pop music industry: a virtual K-pop girl group named Mave, backed by tech giant Kakao (Reuters, 2023). This girl group exists solely in the metaverse, blurring the line between the virtual and the real. I was astonished to find that, in less than two months, their debut single, “Pandora”, had nearly reached 20 million views (Hoesan & Nuraeni, 2023).

Mave consists of four virtual members: Siu, Zena, Tyra, and Marty. Like typical K-pop groups, they release music videos, interviews, and stage performances, all created by web designers and artificial intelligence (AI) (Reuters, 2023; Hoesan & Nuraeni, 2023). Each member brings a distinctive style and expression to her performances, holds a designated role within the group, and has a profile listing details such as her birthday, zodiac sign, and even nationality. Another aspect that intrigued me is the ambiguity of their appearance: they hover between human and virtual characters. What’s more, they can break language barriers by using an AI voice generator to speak Korean, English, French, and Bahasa (Jeong, 2023).

However, let’s break down the consumerism that underpins the K-pop industry to analyze this new phenomenon. K-pop is renowned for its parasocial relationships, where fans interact and communicate with their idols through various means, such as live streams, social media, and fan communities (Jeong, 2023; Hoesan & Nuraeni, 2023). This close connection to the artists motivates fans to support their idols through music streaming, merchandise purchases, and concert attendance (Jeong, 2023; Hoesan & Nuraeni, 2023). Strong emotional connection and fan-artist interaction have been crucial in creating dedicated fan bases and driving the consumption of K-pop products and services (Introducing Korean Popular Culture, n.d.).

From my own experience as a fan of several K-pop groups, I recognize that strong emotional connection with the artists, largely because it rests on the very human qualities a virtual K-pop group lacks: hard work, self-made music, talent, and personal and career growth. Mave therefore faces challenges in authentic fan-artist interaction, such as engaging with fans directly. For some audiences, this could lead to disapproval and little intention to become fans, or even listeners, of the group, despite the music aligning with their preferences.

Despite these challenges, virtual K-pop groups remain an innovative concept that bridges the gap between the virtual and the real, offering fans a new form of entertainment and engagement in the K-pop domain. Still, my answer to the question “Could AI-driven K-pop groups potentially become a dominant force in the world of K-pop?” would, for now, be negative.

References:

Introducing Korean popular culture. (n.d.). Google Books. https://books.google.nl/books?hl=en&lr=&id=sRO8EAAAQBAJ&oi=fnd&pg=PA1957&dq=K-pop+label+companies+capitalize+on+this+fan+engagement,+turning+it+into+a+significant+revenue+source+through+official+merchandise,+subscriptions+on+communication+platforms+that+allow+direct+interaction+with+artists,+and+paid+fan+memberships+with+exclusive+benefits.+&ots=jBpjhoNHF4&sig=Bjn74lpn8r4sI1TEvBFQwvUIZhI&redir_esc=y#v=onepage&q&f=false

Hoesan, V., & Nuraeni, S. (2023). Factors influencing identification as a fan and consumerism towards the virtual K-pop group MAVE:. Journal of Consumer Studies and Applied Marketing, 1(2), 109–116. https://doi.org/10.58229/jcsam.v1i2.72

Jeong, M. (2023). What makes “aespa”, the first metaverse girl group in the K-pop universe, succeed in the global entertainment industry? https://www.econstor.eu/handle/10419/277980

Reuters. (2023, March 17). Meet Mave:, the AI-powered K-pop girl group that look almost human and speak four languages. South China Morning Post. https://www.scmp.com/lifestyle/entertainment/article/3213720/meet-mave-ai-powered-k-pop-girl-group-look-almost-human-and-speak-four-languages


Personal Trainer with AI Coach in Presentation Skills

29 September 2023


Over the past few years, technological advancements in speech processing have drastically transformed how humans engage with digital devices (Yu, 2016). These developments have paved the way for rapid progress in voice recognition technology, which in turn has opened doors for integrating AI into speech training. In particular, AI-driven speech training has proven effective in enhancing students’ presentation abilities (Junaidi, 2020). At the same time, several studies have shown that fear of public speaking causes many individuals such significant anxiety when delivering oral presentations that it can affect their mental health and overall well-being (Grieve et al., 2021).

During an internship at Lepaya, an Amsterdam-based soft-skills training start-up, the author had the opportunity to try out its so-called “AI Coach.” AI Coach is a virtual platform in the mobile app that helps learners acquire communication skills effectively through “Machine Based eLearning (MABEL)” (Hoelzer, 2022).

This AI-driven method of Learning and Development (L&D) applies machine learning algorithms to various data types, such as video, audio, and text, to help users improve their conversational abilities in practical scenarios (AI Skills of the Future: Understand AI and Make It Work for You, n.d.). The process involves collecting practice videos, analysing them with AI systems to extract key speech and conversation indicators such as gestures, facial expressions, and voice, and then providing feedback to users for improvement (Hoelzer, 2022).
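To make the feedback step concrete, here is a minimal sketch of how extracted indicators might be turned into tips for the learner. The indicator names, thresholds, and messages are my own illustrative assumptions, not the actual MABEL output.

```python
# Hypothetical sketch: mapping extracted speech indicators to feedback.
# Indicator names and thresholds are illustrative assumptions, not
# the real MABEL API output.

def generate_feedback(indicators: dict) -> list[str]:
    """Map per-video indicator scores (0..1) to human-readable tips."""
    feedback = []
    if indicators.get("filler_word_rate", 0) > 0.05:
        feedback.append("Try to reduce filler words such as 'um' and 'uh'.")
    if indicators.get("gesture_activity", 1) < 0.3:
        feedback.append("Use more hand gestures to support your message.")
    if indicators.get("eye_contact", 1) < 0.5:
        feedback.append("Look at the camera more often to keep eye contact.")
    if not feedback:
        feedback.append("Great delivery - keep it up!")
    return feedback

print(generate_feedback(
    {"filler_word_rate": 0.08, "gesture_activity": 0.2, "eye_contact": 0.9}
))
```

In a real system the thresholds would likely be learned or calibrated per audience rather than hard-coded.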

The pipeline comprises several steps, beginning with videos being collected internally or through the app, which is developed in Flutter (Hoelzer, 2022). Next, videos are processed by the MABEL API, which analyses video, audio, and text using Python and Docker within SageMaker on AWS, along with machine learning libraries such as TensorFlow, PyTorch, and scikit-learn (Hoelzer, 2022).

Afterwards, the collected data are transformed into datasets, with Luigi used to track transformations and ensure reproducibility. The datasets are then annotated in Label Studio, covering aspects such as filler words, gestures, facial expressions, and overall presentation ratings; these annotated datasets are crucial for training the machine learning models. Next, models are developed on the annotated data, including audio models (e.g., filler-word detection), video models (e.g., human keypoint detection and emotion classification), and rating models (to provide an overall presentation score). Tools like MLflow are used to manage experiments. Lastly, after quality-assurance checks, the updated MABEL pipeline with the new models is deployed (Hoelzer, 2022).
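As a stand-in for the filler-word detection step above, the sketch below counts filler words in a transcript. A production audio model would operate on the waveform itself; the filler list and the transcript-based approach are my own simplifying assumptions.

```python
import re

# Illustrative stand-in for filler-word detection: count filler words
# in a transcript. A real audio model would work on the sound signal;
# this word list is an assumption, not Lepaya's.
FILLERS = {"um", "uh", "like", "so", "basically"}

def filler_word_rate(transcript: str) -> float:
    """Return filler words as a fraction of all words in the transcript."""
    words = re.findall(r"[a-z']+", transcript.lower())
    if not words:
        return 0.0
    fillers = sum(1 for word in words if word in FILLERS)
    return fillers / len(words)

rate = filler_word_rate("So, um, today I will, uh, present our results")
print(f"{rate:.2f}")  # prints 0.33 (3 fillers out of 9 words)
```

The resulting rate could then feed into a feedback rule or an overall rating model, as described in the pipeline.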

In conclusion, this comprehensive approach, which combines AI-driven analysis with effective training methods, represents a major leap forward in communication skills development.

References:

AI Skills of the Future: Understand AI and Make it Work for You. (n.d.). https://www.lepaya.com/blog/ai-skills-of-the-future

Grieve, R., Woodley, J., Hunt, S. E., & McKay, A. (2021). Student fears of oral presentations and public speaking in higher education: A qualitative survey. Journal of Further and Higher Education, 45(9), 1281–1293.

Hoelzer, T. (2022, November 7). MABEL — How we build AI at Lepaya Tech – Lepaya Tech – Medium. Medium. https://medium.com/lepaya-tech/mabel-how-we-build-ai-at-lepaya-tech-2ed6c806a23c

Junaidi, J. (2020). Artificial intelligence in EFL context: Rising students’ speaking performance with Lyra virtual assistance. International Journal of Advanced Science and Technology, 29(5), 6735–6741.
