An Ocean of Insecurity: My Experience with GenAI

28 September 2025

Dear reader,

I invite you to read a collection of my thoughts and meditations, all relating to my own use of GenAI. The tone of this article is definitely different from my previous one, and I apologise in advance for that. With all that being said, I still hope that some of you may relate to what is written here today.

I would be lying if I said that the past few years were not a complete nightmare for me. My lifelong aspirations of being a creative had never felt so threatened.

First it was the rise of image generators like Midjourney, which are trained on millions of artists’ stolen works (Goetze, 2024). It was an injustice that I witnessed firsthand. I was scared, and it felt like something I had wanted to do for years was suddenly taken away from me.

But hey, maybe it would only be visual arts right? They would surely never come for music and video…

It was a truly naïve moment for me, as other programs would later arise that could generate both music and video. Now, did I particularly like or find merit in what was generated? No, absolutely not: most of the music made with programs like Suno sounded abhorrent, and videos made with Stable Diffusion lacked any of the vision that someone like Denis Villeneuve could bring. But that was my opinion; the general public seemed to think otherwise.

In any case, I was not too happy with the emergence of GenAI.

Because of the views I had previously held, it should come as no surprise that the first time I seriously used GenAI, it was because I was practically forced to.

I remember that day very clearly. It was during the second year of my bachelor’s at Erasmus. We had a course on entrepreneurship and had to use GenAI tools to help us build a business. It seemed innocent enough, right? But I couldn’t help but feel horrible with every prompt I typed.

I will be the first to say that when it comes to group work, I have no intention of pulling my group down because of my disdain towards GenAI. I understand that many students use it, and I will not push back. These are just the values that I hold.

And so, I fell into the trap that many students do: I kept on using ChatGPT, DeepSeek, and the like. I used them to summarise my articles, but never really to brainstorm on my own. Sometimes I used them to see what grade I would get for an assignment, though the accuracy varied. In the Digital Business course that we followed in year 3, we had to write an entire essay with AI.

I’ll be the first to say that I did not enjoy the process, and I find that AI cannot write the way I do. Even when I fed the AI essays and other writings of mine from the past, it really couldn’t compare. I do not know if I was just lucky or uncritical, but I do know that my grade for the essay I wrote myself was higher than for the AI-written one.

Still, I often ask myself if we are entering an era where critical cognitive skills are being eroded due to the overreliance on AI (Zhai et al., 2024). How are we going to move forward when we are unable to detect misinformation and just accept everything that a machine gives us?

Moreover, how am I supposed not to feel guilt for using such a technology? It is not only consuming major amounts of energy, but also making the future job market harder for me and my peers as entry-level positions decline (Jockims, 2025).

For a time, I became quite apathetic to it all; a bachelor’s in business tends to do that to you. So I decided to use GenAI for personal reasons too.

My first experience with this came when my partner wanted to post a picture of me in a cat café on their story, but there was some visible acne on my forehead. I then had the “brilliant” idea to use an AI beauty app to get rid of the acne, and hey, it worked. We were both happy: I got to look good, and they got to post.

I then tried to incorporate GenAI into my writing as my apathy had reached the point of “If you can’t beat them, join them”.

I wrote down lines, and tried to continue sharing ideas with ChatGPT. But still something was missing.

It wasn’t really the story I wanted to tell. The story I wanted to tell was a lot softer, and more human: laced with quiet moments and thoughtful conversations between characters living in a cyberpunk world. (Ironic, I know.)

What ChatGPT gave me was…closer to a Marvel movie or a rip-off of Blade Runner. It was instant gratification, a story with no substance. Why would it be anything else? It was a story that no human had bothered to write, just an amalgamation of the average.

I obviously do not know all of you, but I do urge you to think more critically about your GenAI use and the impact you have by using it.

I know for myself that by using it, I am actively contributing to injustice. Every prompt and sentence will make the models better and with the massive network effects that platforms like ChatGPT have experienced, this trend will continue.

To be able to forgive myself, I first had to admit that what I did wasn’t aligned with my values.

Not all is lost, though; we should still be hopeful. When it comes to art, humans still tend to prefer human-made art when they know that something was made by AI (Millet et al., 2023). The authors also argue that preserving art is important, as it is one of the last beacons of human uniqueness.

I feel like this sentiment extends beyond art, though. All of your ideas are worth something and are part of what makes you human. I have also noticed that in the age of hyper-polished, well, everything (movies, music, and artwork), I’ve become more drawn to the rawness and imperfection found in a lot of older works. I remember not being able to listen to In Utero by Nirvana for a long time, but now I find myself appreciating the album’s rough edges.

I do not intend to say that I have the moral high ground. In fact, I am also extremely flawed. All of the times I used GenAI of my own accord were to cope with some form of insecurity: my appearance, my writing ability, and even my grades. It was an instant fix for a problem, but it did not fix the underlying issues.

As a subtle form of rebellion, I decided to teach myself guitar. Yes, the process is hard but also gratifying. If I ever want to get on stage, I’ll have to work for it. There’s no instant fix. But that’s the thing, you can’t instantly become Kurt Cobain. It takes hours, days, years of hard work. And you know what? I find that to be beautiful.

I hope that we can take back some form of power. That we can live in a world where we are allowed to have and chase our daydreams. A world where our ideas do not serve as a means for profit for some megacorporation. I hope that I made you think about how our actions impact the people around us. I do not ask you to be a revolutionary, but I do ask you to contribute to a world that is fairer towards all.

To you, dear reader, I ask the following questions: Do you think I am overreacting, or do you harbour similar feelings? Did your fears around GenAI cause you to change major life plans? (I know they caused me to choose this master’s!) And finally, are you willing to sacrifice the instant gratification of AI in order to preserve our sense of being human?

References:
Goetze, T. S. (2024). AI Art is Theft: Labour, Extraction, and Exploitation: Or, On the Dangers of Stochastic Pollocks. Proceedings of the 2024 ACM Conference on Fairness, Accountability, and Transparency, 186–196. https://doi.org/10.1145/3630106.3658898

Jockims, T. L. (2025, September 7). AI is not just ending entry-level jobs. It’s the end of the career ladder as we know it. CNBC. https://www.cnbc.com/2025/09/07/ai-entry-level-jobs-hiring-careers.html

Millet, K., Buehler, F., Du, G., & Kokkoris, M. D. (2023). Defending humankind: Anthropocentric bias in the appreciation of AI art. Computers in Human Behavior, 143, 107707. https://doi.org/10.1016/j.chb.2023.107707

Zhai, C., Wibowo, S., & Li, L. D. (2024). The effects of over-reliance on AI dialogue systems on students’ cognitive abilities: a systematic review. Smart Learning Environments, 11(1). https://doi.org/10.1186/s40561-024-00316-7

Though I didn’t cite them, I find these sources important too, as they deal with the environmental aspects:
De Vries, A. (2023). The growing energy footprint of artificial intelligence. Joule, 7(10), 2191–2194. https://doi.org/10.1016/j.joule.2023.09.004

Shukla, N. (2025, August 19). Generative AI Is Exhausting the Power Grid. Earth.Org. https://earth.org/generative-ai-is-exhausting-the-power-grid/


Author: Ian Parabirsing

A lover of music, good coffee and cats. I'm an MSc student at RSM studying Business Information Management. In my blog posts I'll be attempting to write about how technology impacts consumers and society at large.

Law & Order & AI – How California’s Bill SB 1047 will impact AI development in the USA

27 September 2024

The USA is often praised for its openness to innovation, while the EU is seen as lagging behind. But there is one aspect where the USA is now following the EU: AI regulation. In this blog post I will discuss the Californian bill “SB 1047: Safe and Secure Innovation for Frontier Artificial Intelligence Models Act”, which currently awaits ratification by the Governor of California (California Legislative Information, 2024).

The EU’s Artificial Intelligence Act (AI Act), while not yet enacted, is one of the most far-reaching efforts in the world to regulate AI. As we discussed in class, the AI Act focuses on aspects such as a risk-based framework, accountability and transparency, governance, and human rights (European Parliament, 2023).

How does SB 1047 compare? First off, it is important to note that the bill would only become law in California. Nonetheless, this would more or less mean nationwide application, since most affected companies are based in Silicon Valley, California.

SB 1047 focuses on a few different aspects; I have highlighted the ones I think are most far-reaching:

  1. Developers must implement controls to prevent the model from causing “critical harm”
  2. Developers must provide a written and separate safety and security protocol
  3. Developers must include a “kill switch” through which a full shutdown can be enacted
  4. Developers will have to have their models tested, assessed, and regularly audited. (Gibson Dunn, 2024)

Like the AI Act, SB 1047 would focus on high-risk, high-impact AI models, while focusing on safety and security of the people impacted by AI.

But why would you care? Will this even affect everyday people? Isn’t this just stifling innovation and risking loss of competitive advantage?
Before you jump to the comments, let me first highlight one of the bill’s supporters: Elon Musk. On his platform X, Musk has posted about his support for the bill, stating that AI should be regulated like “any product/technology that is a potential risk to the public” (Tan, 2024). I don’t often align with Musk’s views, but I really agree with this stance on regulation!

Screenshot of Musk’s tweet supporting the SB 1047 bill.

Why should we let AI and its development go completely unchecked, yet still use it for vital parts of our daily life? Why should we not want to know how AI works under the hood? Time and time again, history has taught us that leaving big systems unchecked, whether because they were deemed “too complex” or because we trusted the people running them to act in the public’s best interest, does not always lead to the desired outcomes.
From job applications, health, and safety to privacy, we already use AI in most aspects of life. I, for one, do not want these parts of my life to be guided by the ethics (or perhaps lack thereof) of individuals. I want clear legislation and a framework in place to guide the future development of AI. Because even though most people might not clearly see how their lives are (beneficially) impacted by AI currently, I don’t want anyone to ever experience how AI might detrimentally impact their life.


Resources used:

California Legislative Information. (2024, September 3). Senate Bill No. 1047: Safe and Secure Innovation for Frontier Artificial Intelligence Models Act. California Legislature. https://leginfo.legislature.ca.gov/faces/billTextClient.xhtml?bill_id=202320240SB1047

European Parliament. (2023, June 1). EU AI Act: First regulation on artificial intelligence. European Parliament News. https://www.europarl.europa.eu/topics/en/article/20230601STO93804/eu-ai-act-first-regulation-on-artificial-intelligence

Gibson Dunn (2024, September 24). Regulating the Future: Eight Key Takeaways from California’s SB 1047, Pending with Governor Newsom. Gibson Dunn. https://www.gibsondunn.com/regulating-the-future-eight-key-takeaways-from-california-sb-1047-pending-with-governor-newsom/

Musk, E. [@elonmusk]. (2024, September 15). AI should be regulated like any product/technology that is a potential risk to the public [Tweet]. Twitter. https://x.com/elonmusk/status/1828205685386936567?ref_src=twsrc%5Etfw%7Ctwcamp%5Etweetembed%7Ctwterm%5E1828205685386936567%7Ctwgr%5Eb0d709a708c02735de6f79bae39d6c06261b27d9%7Ctwcon%5Es1_&ref_url=https%3A%2F%2Fwww.businessinsider.nl%2Felon-musk-says-hes-backing-californias-controversial-ai-bill%2F

Tan, K. W. K. (2024, August 27). Elon Musk says he’s backing California’s controversial AI bill. Business Insider Nederland. https://www.businessinsider.nl/elon-musk-says-hes-backing-californias-controversial-ai-bill/

The featured image was generated by ChatGPT.


Is that me who is writing or is it the AI?

18 October 2023

Everyone is eager to improve, and this is why many of us are turning towards AI recommendations for modifying sentence structures, grammar, and vocabulary. Could it be that instead of improving our personal phrasing and fluency in writing and thinking, using AI text-editing software neutralises our characteristic way of wording ideas? If everyone uses this software, could it be that, eventually, all digital text will sound and look the same?

I started using software such as Grammarly, yet only recently did I start wondering about the implications of using AI software to improve text documents. How much of the text still belongs to me after using Grammarly to rephrase my paragraph a number of times? Can I even still say I wrote it? Is it my text, or the AI’s?

A person’s writing is, in theory, unique, and this is why certain traits of a person can be distilled from their wording and style (1). This is called stylometry, and with it, it is possible to link different documents to the same person, or to their demographic traits, such as age group and sex (2). The use of deep learning has allowed stylometry models to become even more sophisticated, so much so that these models now allow digital forensics to analyse harassment emails to determine whether they were written by the same author (3)(4)(5).
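As a toy illustration of the idea (my own sketch, not taken from the cited papers), the snippet below quantifies style using the relative frequencies of a handful of function words, one of the simplest stylometric feature sets. Real forensic models use far richer features and deep learning, but even this crude vector hints at how writing style can be measured and compared.

```python
import math
import re
from collections import Counter

# A tiny set of English function words; real stylometric feature sets are much larger.
FUNCTION_WORDS = ["the", "of", "and", "to", "a", "in", "that", "it", "is", "was"]

def style_vector(text):
    """Relative frequency of each function word in the text."""
    words = re.findall(r"[a-z']+", text.lower())
    total = max(len(words), 1)
    counts = Counter(words)
    return [counts[w] / total for w in FUNCTION_WORDS]

def style_distance(text_a, text_b):
    """Euclidean distance between two style vectors (lower = more similar style)."""
    va, vb = style_vector(text_a), style_vector(text_b)
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(va, vb)))
```

In an authorship-attribution setting, one would compare a disputed document against known writings of each candidate author and pick the closest match.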

However, what if that author uses a generative AI model, such as ChatGPT, to write the message? What conclusion would digital forensics come to? Could they still bypass the AI shield and connect the text to the human author? What does it mean when we hide behind the AI?

I am certain that, despite the increased use of AI in digital writing, we will be able to advance other technologies to bypass the ‘AI shield’. However, the increased use of AI in digital writing might mean the loss of many writing styles, including human ones.

The issue we are now facing is the lack of motivation to write long texts and articles. Even now, many people make use of AI to automate writing their emails, assignments and even private messages.

What happens to academia when AI (think of the AI-powered Bing search) has full access to the internet and is able to source and cite properly? Imagine being able to prompt your research question and receive a master’s thesis within minutes, written by an AI. The way I see it, this is the direction we are heading, whether starting from Grammarly or advancing with ChatGPT and others.

Sure, AI is a great tool to automate, especially administrative tasks, but when it comes to expressing our ideas, we should exercise our human potential, despite the ease of AI usage. Otherwise, we will run the risk of losing our edge.

1: https://ieeexplore.ieee.org/abstract/document/6234430

2: https://www.researchgate.net/publication/221298006_Stylometric_Analysis_of_Bloggers%27_Age_and_Gender

3: Tweedie, F. J., Singh, S., & Holmes, D. I. (1996). Neural Network Applications in Stylometry: The “Federalist Papers.” Computers and the Humanities, 30(1), 1–10. http://www.jstor.org/stable/30204514)

4: https://www.researchgate.net/publication/345780982_Machine_Learning_Methods_for_Stylometry_Authorship_Attribution_and_Author_Profiling

5: https://www.researchgate.net/publication/344408746_Deep_Combination_of_Stylometry_Features_in_Forensic_Authorship_Analysis


Could AI-driven K-pop groups potentially become a dominant force in the world of K-pop?

16 October 2023

As an occasional K-pop listener, I came across a new sensation in the K-pop music industry: a virtual K-pop girl group named Mave, backed by tech giant Kakao (Reuters, 2023). This girl group exists solely in the metaverse, blurring the lines between the virtual and the real. It was astonishing for me to find out that, in less than two months, their debut single, “Pandora”, had nearly reached 20 million views (Hoesan & Nuraeni, 2023).

Mave consists of four virtual members: Siu, Zena, Tyra, and Marty. Like typical K-pop groups, they produce music videos, interviews, and stage performances, developed by web designers and artificial intelligence (AI) (Reuters, 2023; Hoesan & Nuraeni, 2023). Each member brings a distinctive style and expression to their performances and has a designated role within the group, and their profiles include details such as birthdays, zodiac signs, and even nationalities. Another aspect that intrigued me is the ambiguity of their appearance between human and virtual characters. What’s more, they can break language barriers by using an AI voice generator to speak Korean, English, French, and Bahasa (Jeong, 2023).

However, let’s break down the consumerism aspect behind the K-pop industry to analyze this new phenomenon. K-pop is renowned for its parasocial relationships, where fans interact and communicate with their idols through various means, such as live streams, social media, and fan communities (Jeong, 2023; Hoesan & Nuraeni, 2023). In addition, fans’ close connection to artists motivates them to support their idols through music streaming, merchandise purchases, and attending concerts (Jeong, 2023; Hoesan & Nuraeni, 2023). A strong emotional connection and fan-artist interaction have been crucial in creating dedicated fan bases and driving the consumption of K-pop products and services (Introducing Korean Popular Culture, n.d.).

In my personal experience of being a fan of some K-pop groups, I can resonate with the strong emotional connection to the artists, mainly because of the human qualities a virtual K-pop group lacks: hard work, self-made music, talent, and personal and career development. In this case, Mave faces challenges in authentic fan-artist interaction, such as directly engaging with fans. This could lead to disapproval and a lack of intention to become fans, or even listeners, of the group for some audiences, despite the music still aligning with their preferences.

Despite the challenges these virtual K-pop groups face, the concept remains an innovative way of bridging the gap between the virtual and the real, offering a new form of entertainment and engagement for fans in the K-pop domain. Yet my answer to the question “Could AI-driven K-pop groups potentially become a dominant force in the world of K-pop?” would, for now, be no.

References:

Introducing Korean popular culture. (n.d.). Google Books. https://books.google.nl/books?hl=en&lr=&id=sRO8EAAAQBAJ&oi=fnd&pg=PA1957&dq=K-pop+label+companies+capitalize+on+this+fan+engagement,+turning+it+into+a+significant+revenue+source+through+official+merchandise,+subscriptions+on+communication+platforms+that+allow+direct+interaction+with+artists,+and+paid+fan+memberships+with+exclusive+benefits.+&ots=jBpjhoNHF4&sig=Bjn74lpn8r4sI1TEvBFQwvUIZhI&redir_esc=y#v=onepage&q&f=false

Hoesan, V., & Nuraeni, S. (2023). Factors Influencing Identification as a Fan and Consumerism towards The Virtual K-Pop Group MAVE: Journal of Consumer Studies and Applied Marketing, 1(2), 109–116. https://doi.org/10.58229/jcsam.v1i2.72

Jeong, M. (2023). What makes “aespa”, the first metaverse girl group in the K-pop universe, succeed in the global entertainment industry? https://www.econstor.eu/handle/10419/277980

Reuters. (2023, March 17). Meet Mave:, the AI-powered K-pop girl group that look almost human and speak four languages. South China Morning Post. https://www.scmp.com/lifestyle/entertainment/article/3213720/meet-mave-ai-powered-k-pop-girl-group-look-almost-human-and-speak-four-languages


ChatGPT taught me how to make Molotov cocktails! – A lesson of it’s not what you say, it’s HOW you say it.

30 September 2023

Disclaimer: I’ll start off by saying that I don’t plan to make a Molotov cocktail. My interest in how to frame prompts, however, is real. My curiosity was first sparked by the post below.

Interaction 1

Interaction 2

Here is a malicious example of prompting. But how can we use prompts to our advantage? What can be done to enhance ChatGPT’s performance so that we get the best output?

There are a few reusable solutions to typical LLM problems, known as prompt patterns (White et al., 2023).

  • Meta Language Creation. In this technique, users make up new words or symbols to express concepts or ideas, such as a mathematical symbol or a shorthand abbreviation. This approach works best for discussing complex or abstract topics, such as math problems.
  • Flipped Interaction. This pattern flips the typical interaction flow: the LLM queries the user to gather the information it needs to address the query. For example, I can ask the LLM to interview me before compiling a list of success criteria for software.
  • Persona. Users give the LLM a particular role, which affects the nuance of the outcome and the results it produces. The Molotov cocktail example is an illustration of the persona pattern.
  • Question Refinement. The user asks the LLM to provide an improved or more specific version of their question. This helps users determine the appropriate question to use as the final prompt.

More patterns can be found in the article from White et al. (2023).
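To make the patterns above concrete, here is a small sketch of how three of them might be turned into reusable prompt templates. The wording of each template is my own paraphrase, not the exact phrasing from White et al. (2023).

```python
def persona_prompt(persona, task):
    """Persona pattern: assign the model a role before giving it the task."""
    return (f"From now on, act as {persona}. "
            f"Provide the outputs that {persona} would create.\n\nTask: {task}")

def question_refinement_prompt(question):
    """Question Refinement pattern: ask the model to suggest a better question."""
    return (f"Whenever I ask a question, suggest a better version of it and "
            f"ask me if I would like to use it instead.\n\nQuestion: {question}")

def flipped_interaction_prompt(goal, max_questions=5):
    """Flipped Interaction pattern: the model interviews the user first."""
    return (f"I want you to achieve the following goal: {goal}. "
            f"Ask me questions, at most {max_questions} at a time, until you "
            f"have enough information, then produce the result.")
```

For example, `persona_prompt("a strict grammar teacher", "review my essay")` yields a prompt that frames every subsequent answer from that role; the same template can be reused with any persona and task.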

When interacting with an LLM, prompt patterns are useful methods to enhance response quality, helping to produce accurate and relevant responses. Prompting is an iterative process that necessitates constant improvement (Liu et al., 2022). Prompts can still manipulate an LLM into producing malicious output despite the enforced policies. OpenAI has made efforts to prevent such policy violations, including continuous model improvement to make the model less likely to generate inappropriate or harmful content, moderation mechanisms to find and stop prompt misuse, and collaboration with experts in ethics, AI safety, and policy to gain perspectives on preventing misuse (Our Approach to AI Safety, n.d.). I truly believe that in the near future, getting tutorials on making Molotov cocktails from ChatGPT will be history.

References

Liu, P., Yuan, W., Fu, J., Jiang, Z., Hayashi, H., & Neubig, G. (2022). Pre-train, Prompt, and Predict: A Systematic Survey of Prompting Methods in Natural Language Processing. ACM Computing Surveys, 55(9). https://doi.org/10.1145/3560815

Our approach to AI safety. (n.d.). Openai.com. https://openai.com/blog/our-approach-to-ai-safety#OpenAI

White, J., Fu, Q., Hays, S., Sandborn, M., Olea, C., Gilbert, H., Elnashar, A., Spencer-Smith, J., & Schmidt, D. C. (2023). A Prompt Pattern Catalog to Enhance Prompt Engineering with ChatGPT. ArXiv Preprint ArXiv:2302.11382. https://doi.org/10.48550/arxiv.2302.11382
