Are We Living Smarter or Thinking Less with AI?

30 September 2025

With the launch of the beta version of ChatGPT in November 2022, my interactions with Generative AI moved from something new and unprecedented to becoming a constant companion in my everyday life. Now, almost three years later, GenAI is deeply rooted in my professional and personal environments.

In my work environment, our company’s internal GPT “Vally” acts almost like a digital co-consultant: drafting emails, enhancing client-facing material, and even simulating scenarios that would otherwise take me forever to work through. Vally accelerates my workflows and raises the quality of my output. At the same time, I keep asking myself: am I improving my own skills, or am I simply outsourcing part of my thinking to an algorithm that never sleeps?

At home, the interactions are far less formal but no less present. Just the other day, I felt like making pancakes but had no scale and only a random mix of leftovers in the fridge. Instead of calling my mom, I asked ChatGPT for a recipe using only basic ingredients and a cup for measurement. A couple of minutes later, I was eating delicious pancakes. It might seem trivial, but moments like this show how naturally I hand over some of the smallest everyday decisions to AI.

Within my studies, Generative AI has taken on yet another role. It serves as a discussion partner, challenging my reasoning and summarizing complex readings. Rather than replacing critical thinking, it often sharpens it. By comparing my arguments to AI-generated ones, I am forced to refine my perspective and develop it further. Still, the danger of overreliance is real. If every academic idea is first filtered through AI, does my originality become diluted?

This tension between support and dependence is at the core of my reflection. Generative AI is not only a tool for productivity; it also shapes how we think, learn, and make decisions. It is both enabling and limiting. Enabling, because it gives me faster access to knowledge and creativity. Limiting, because it tempts me to surrender intellectual effort to convenience.

The broader challenge for us as students, professionals, and individuals is not whether AI will continue to improve. It will! The real question is whether we can develop the awareness and discipline to use it as a tool without letting it take away our ability to think and decide independently. The choice is now in our hands: how dependent do we want to become on machines as daily companions?

References
Saager, M. (2025). Vally – our valantic custom AI Assistant. valantic. https://www.valantic.com/en/generative-ai/ai-assistant/


The Fun and Frustration of Using AI to Generate Weekly Cartoons

30 September 2025

Generative AI tools are hot. Nowadays, it seems like there’s a generative AI tool for almost every application. Websites can be built, songs can be created, software can be developed, and images can be generated. The last one is one of my favourites.

I’m an active member of what we in the Netherlands call a “dispuut” (comparable to a fraternity) within a student association. Every Wednesday, I send a WhatsApp message to all active members of my dispuut to inform them about the drinks of the coming week, accompanied by a funny anecdote from the past week.

A couple of weeks after sending these messages became my responsibility, I started experimenting with adding cartoon pictures created by ChatGPT, showing what is coming up or what has happened in a funny format, so that people would have a laugh. I added an example as the featured image of this blog post.

Since I started doing this a month ago, I’ve noticed a couple of things. On a positive note, the images are generated in the same style every time, which lets me keep the look I want week in, week out. On a negative note, ChatGPT doesn’t always implement your feedback, even if it tells you that it did. And sometimes it fails to explain why a request couldn’t be turned into an image. It would be nice if there were a clear and concise explanation of why generation wasn’t possible, so that the request could be adjusted appropriately.

In conclusion, having pictures generated by ChatGPT is a lot of fun, but it doesn’t always work out the way you want it to.


Is My Relationship With ChatGPT Becoming a Love-Hate One?

30 September 2025

Since I started my Bachelor programme in 2022, ChatGPT has quickly become part of my personal and student life. As a (some would say ‘typical’) Gen Z’er, I started experimenting with the tool and mastered it quickly. Early in my student years I found it to be a great tool for studying, as it helps me solve complicated math equations, generate business ideas for group projects, and get more in-depth explanations of theories discussed in lectures. The biggest advantage of all is the time it saves me.

I can confidently say I am grateful to study in this technological era, as it makes the process of studying much more efficient.

However, the very attributes that draw students to ChatGPT, such as speed, accessibility, and convenience, also raise significant issues. Hasanein and Sobaih (2023) state that GenAI can have negative effects on students, including overreliance, a lack of quality and accuracy, and an erosion of student skill sets. These findings suggest that while ChatGPT can support study activities, there is also a chance that it deters students from engaging in more complex cognitive processes.

These findings make me reconsider the use of AI in my everyday learning. On the one hand, I am excited to continue using the tool, as it saves me a lot of time and improves my academic performance. On the other hand, I also want to further develop my cognitive skills, which I fear may deteriorate if I use GenAI tools too much. To decide whether to continue incorporating ChatGPT into our learning strategies or return to more traditional study methods, I would like to initiate a discussion. The difficulty ultimately lies in finding a balance between building your own critical thinking and the advantages that AI brings.


When a Legendary Artist Lets AI Sing: The Blurred Boundaries of Music Production and Controversy

30 September 2025

Kanye West, the renowned hip-hop musician, rapper, and 24-time Grammy Award winner, released a music video for his new album “Bully” in 2025, featuring a significant portion of the album’s vocals generated using artificial intelligence. The project was initially teased as having AI elements to make music production easier. However, Kanye later expressed regret, stating he “hated” AI and that the album was unfinished, though it still contained AI vocals.

A typical and quickly recognized example of this is a song on the album in which Kanye, who had never spoken Spanish publicly and was generally believed to be unable to speak it, sings the entire melody in authentic Spanish. While the song is beautiful, the use of AI on Bully has sparked strong debate among fans and critics, with some appreciating the experimental aspects and others criticizing the quality and the controversial nature of its production.

Kanye’s fans believe this is a legitimate creative tool for the legendary artist. Even when using AI to create music, the AI needs to be given clear and explicit instructions to bring the artist’s vision to life, and it can even incorporate elements previously unattainable by artists, significantly enriching the listener experience. Furthermore, AI-generated music still reflects the artist’s musical taste, making it difficult to conclude that it isn’t the musician’s original work.

Opponents, however, argue that originality is paramount in the field of musical art, meaning that works of art should be created, performed, or sung by the artist, not produced by AI tools. This would undoubtedly further disrupt the music industry, making it difficult for listeners to distinguish between genuine artist-created music and AI-generated music. Such behavior could even have serious consequences. Imagine a few years from now, when all the music you play in your headphones is actually generated by AI. Would you still be willing to pay to see your favorite musician perform in person?

This has given me a lot to think about. If such successful artists are using AI to create music, will smaller musicians in the industry adopt AI tools even more aggressively? Will the industry’s order be maintained? Will we still hear truly moving and beautiful music, and will it come from your favourite artist or from your favourite artist’s AI?

I believe the line between right and wrong remains fuzzy. Perhaps only with time and the development of the industry and technology will we know what the future of the music industry holds. But the only certainty is that the success of a piece of music will always be determined by the market—that is, by the ears of the audience. This is undeniable, whether it’s AI-generated or original.

References

Kanye West Drops Three Songs From ‘Bully’. HipHopDX, 2025. https://hiphopdx.com/news/kanye-west-drops-three-songs-bully 

Kanye West confirms AI use in Bully album and responds to fan backlash. The Tribune, 2025. https://tribune.com.pk/story/2526701/kanye-west-confirms-ai-use-in-bully-album-and-responds-to-fan-backlash 

Kanye West sort-of releases new album, ‘Bully,’ amid more hateful posts. Los Angeles Times, 2025. https://www.latimes.com/entertainment-arts/music/story/2025-03-19/kanye-west-sort-of-releases-new-album-bully-amid-more-hateful-posts 

Kanye West Shows Off How He’s Using AI In The Making Of ‘Bully’. HotNewHipHop, 2025. https://www.hotnewhiphop.com/882696-kanye-west-shows-off-how-ai-bully-hip-hop-news 

Kanye West Drops ‘Bully’ Album on Streaming—But There’s a Twist. Hot97, 2025. https://www.hot97.com/news/kanye-west-drops-bully-album-on-streaming-but-theres-a-twist/ 

Kanye West drops surprise Bully album – but did he clear the samples? MusicTech, 2025. https://musictech.com/news/kanye-west-bully-album-samples/ 


The Unfulfilled Promise of an AI that can take my Job

30 September 2025

With a background in Computer Science, I was able to enter the job market as a software engineer early on. That is how I started working as a programmer at a medium-sized Dutch software company after my first year as a Bachelor student. At that time, AI and generative AI would not have an impact on our line of work for another year and a half, when ChatGPT launched for the first time.

When working on a large enterprise system for industries with unique and complex processes, the complexity of the software architecture and class structure increases exponentially. Where a coding exercise in class might have entailed creating a few classes, implementing a few constructors, and running a specified set of methods, all within a predefined programming language, coding at a software company involves countless additional steps. Even the simplest bug fixes and feature developments require deep knowledge of how a niche subset of the source code functions, a strong ability to read and understand the complex calculations and algorithms run by the backend, and intricate knowledge of not just multiple programming languages (Java, JavaScript, TypeScript…), but also the frameworks built on them, such as React, Node, or Ember.

It was no surprise, therefore, that all of us were quite intrigued by the potential of AI-assisted coding right from the release of ChatGPT. Coding plugins and extensions built into our Integrated Development Environments (IDEs) had already been widely used within the company for many years, and they helped us focus on the underlying logic errors that needed to be solved. With the new generative AI, however, the premise was that the assistant could support or even take over this work. After much experimentation and the rollout of an enterprise version of Google Gemini, we quickly reached the limits of AI’s current coding capabilities. Despite all the drama in the news and the public perception of AI as a coding killer, we found that although Gemini could analyze and correct a few individual lines of code, it is not yet able to navigate or hold a large codebase in order to understand the context of a problem or feature.

Even companies like Microsoft, Google, and Meta, some of the only organizations on earth able to afford to train their own GenAI models on their own code, are unable to rely on their AI to fix small bugs autonomously. Too much risk is involved in incorrect design choices, edge-case bugs, and, most importantly, the verification process. This process is critical and still requires testing by real humans who are skilled and competent enough to assess the end results against the chosen requirements.

For us and the rest of the development world, AI coding assistants will stay limited to “chunking” code into deliberately chosen fragments, selected by the developer, that help the AI assess a coding task. That is a great improvement: it can yield automatic generation of “boilerplate code” (repetitive code that is used throughout a project), a “stub implementation” to build on, or a list of suggested corrections when a developer gets hard stuck. Still, generative AI does not come close to being a true job killer, and even if it could become one, it will take additional years before its full capabilities are available to the large majority of software companies on earth.
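
To make those terms concrete, here is a minimal sketch (in Python for brevity, even though our stack is Java and TypeScript) of the kind of stub implementation an assistant might hand back from a deliberately chosen chunk of context. The invoicing domain, names, and TODOs are my own illustration, not output from Gemini or code from my employer.

```python
# Hypothetical example of an assistant-generated "stub implementation":
# the happy path is filled in, edge cases are flagged for the developer.

from dataclasses import dataclass


@dataclass
class Invoice:
    customer_id: str
    amount: float
    country: str


def apply_regional_discount(invoice: Invoice, rates: dict[str, float]) -> float:
    """Return the invoice amount after applying a country-specific discount.

    Generated as a stub: only the basic case is implemented; the open
    questions are left as TODOs to verify against real requirements.
    """
    rate = rates.get(invoice.country, 0.0)
    # TODO: validate that rate is within an allowed range (requirement unclear)
    # TODO: decide how negative or zero amounts should be handled
    return round(invoice.amount * (1 - rate), 2)


if __name__ == "__main__":
    demo = Invoice(customer_id="C-001", amount=120.0, country="NL")
    print(apply_regional_discount(demo, {"NL": 0.05, "DE": 0.10}))  # 114.0
```

The value here is not that the code is finished, but that the developer gets a scaffold to review, correct, and test against the requirements they know and the AI does not.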


Generative AI, a possibility to communicate with the dead?

30 September 2025

When people think about generative AI, they usually talk about an increase in productivity, taking over redundant tasks, and predictions of various outcomes. But companies such as HereAfterAI try to use generative AI to extend a person’s presence, digitally of course, even after their death. Imagine a terminally ill father or mother of a young family who records hours of themselves having conversations, telling stories, or providing advice and guidance. When the day comes and the parent passes away, an AI takes over, allowing the parent to “communicate” with their family in the same tone and style.

While the idea may sound like a scene from a ghost movie or a science-fiction film, for grieving families this might offer a way to live with and process the loss. Generative AI can offer relatives a form of connection beyond memories, photos, and video recordings. Especially for young children, getting to know their parent or asking them for advice and actually receiving an answer might offer comfort in grief and create an alternative way of living with that loss. Ramesh K., a grandfather to a young boy, decided to build a digital avatar of himself to allow his grandson to learn from his experience and advice.

While the idea might sound appealing to some, it raises a number of ethical questions and doubts. There is no scientific proof that being able to communicate with a dead relative actually helps rather than hinders the grieving process. Additionally, how are users protected from misuse and commercialization of their data?

While technological advancements, especially in generative AI, make many things possible that humans could not have dreamed of 20 years ago, it is important to ask where the line should be drawn. Is communicating with the dead already past that line? What do you think?


What I Learned When Half the Class Used the Same AI-Video Tool

29 September 2025

Last year, during my minor Marketing Sustainable Innovations, we worked on marketing plans for local energy cooperatives around Amsterdam. The final assignment was to create a short online video to present our findings and ideas. My group decided to do everything ourselves. We each recorded a part of the voice-over, animated our slides, and spent hours syncing both to give the video a personal touch.

When presentation day arrived, the whole class gathered to watch the results. I was excited to see what other groups had found during their research at different energy cooperatives. But after a few minutes I noticed something strange. At least four groups had submitted almost the same video. They had all used the same AI generator, which produced identical intros, the same background visuals, and even a similar narration style. The videos looked slick and professional, but they were hard to tell apart.

That afternoon was my first real experience with generative AI. Not only had I not realized that using AI to generate a video was even an option, I was also impressed by how quickly AI can create something that looks so polished. At the same time I was a little disappointed. The creativity that could have made each project unique was missing. Instead of presenting different findings and creative styles, many videos turned into the same generic AI version.

This taught me a very important lesson: AI technologies can be a great help, but relying on them too much makes everything look the same and takes away the personal touch. Creativity is not just about getting things done quickly or making something look professional; it is also about telling the story in your own way. These tools can definitely save time and help you get started, but the real uniqueness comes when you add your own ideas and perspective. By finding that balance, we can use technology to support our creativity instead of replacing it.


Cooking for a Crowd with ChatGPT

29 September 2025

Last Saturday I experimented with generative AI in the kitchen. Since I didn’t want to order pizza, I used ChatGPT to work out how much food and drink I needed when hosting a large crowd.

Anyone who has ever tried to cook for 8+ people probably knows the struggle. Recipes are usually written for four, and manually multiplying quantities can be annoying and error-prone when cooking for an odd number of people. So I decided to see whether ChatGPT could help me. I gave it a pasta recipe for four people and asked it to scale it up for 17. It gave me a detailed shopping list with adjusted quantities of pasta, sauce, and vegetables.

I was impressed, since it saved me a lot of thinking time. But I noticed a big limitation: the AI assumes you can just multiply everything linearly. For example, it told me to use 15 tablespoons of olive oil in one pan. That doesn’t actually work, because when cooking for large groups you don’t cook everything at once in a single pan. You need to split the recipe into batches, because it simply won’t all fit in one pan. Or at least, my pan wasn’t big enough.
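
As an illustration, here is a small Python sketch of the logic I ended up doing in my head anyway: scale the quantities linearly, then split the servings into batches that actually fit a pan. The ingredient amounts and pan capacity are made up for the example; this is not what ChatGPT produced.

```python
# Scale a recipe linearly, then split the servings into pan-sized batches.
import math

RECIPE_FOR_4 = {               # quantities per 4 servings (illustrative numbers)
    "pasta_g": 400,
    "tomato_sauce_ml": 500,
    "olive_oil_tbsp": 4,
}

def scale_recipe(recipe: dict[str, float], servings: int, base: int = 4) -> dict[str, float]:
    """Scale every ingredient linearly from `base` servings to `servings`."""
    factor = servings / base
    return {name: qty * factor for name, qty in recipe.items()}

def split_into_batches(servings: int, pan_capacity: int) -> list[int]:
    """Split the total servings into batches that each fit the pan."""
    batches = math.ceil(servings / pan_capacity)
    per_batch, remainder = divmod(servings, batches)
    # Spread the remainder over the first few batches.
    return [per_batch + (1 if i < remainder else 0) for i in range(batches)]

if __name__ == "__main__":
    print(scale_recipe(RECIPE_FOR_4, servings=17))
    # {'pasta_g': 1700.0, 'tomato_sauce_ml': 2125.0, 'olive_oil_tbsp': 17.0}
    print(split_into_batches(17, pan_capacity=6))   # [6, 6, 5]
```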

I also tried the same experiment with cocktails. This worked surprisingly well: scaling up espresso martinis for 17 people gave me an accurate shopping list for coffee, vodka bottles, coffee liqueur, and edible coffee beans. Still, ChatGPT forgot that the drinks should be made in pitchers or batches, not in one giant shaker. So even with the food and cocktails planned for 17 people, I still had to figure out how big every batch needed to be and what would fit in my pans and cocktail shaker.

Saturday’s experiment showed me that AI is a great starting point for meal and party planning, but that human judgment is still needed when reviewing ChatGPT’s generated answers.


The Job Hunt in the Age of Generative AI

29 September 2025

One of the most interesting areas where I have used generative AI so far is job applications. I often struggled with wording when creating my resume and cover letters, such as how to appear professional without being generic, or how to adapt the same experience for different roles. With the assistance of tools like ChatGPT and Perplexity, I was able to reframe my work experience more clearly. For instance, they recommended reformulating lines like “Supported business development” into “Conducted market analysis and prepared client proposals, helping to improve communication”. Although the AI didn’t create anything new, it did help me use stronger words to express my accomplishments.

Meanwhile, I noticed that businesses are also beginning to use AI in their hiring procedures. Some HR departments already use AI-powered applicant tracking systems (ATS) that scan resumes for keywords. Generative AI now goes one step further: it can autonomously create applicant summaries, write job descriptions, and even recommend interview questions. In theory, this may result in a quicker and more reliable hiring procedure.

However, the risks are easy to see. AI adoption by both recruiters and applicants may turn the process into a sort of “automation arms race”: applicants use AI-generated keywords to optimise their resumes, while recruiters use AI to filter them. In such a system, the questions come up: what happens to authenticity? And who makes sure AI doesn’t reproduce biases by favouring particular educational backgrounds or language patterns?
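
To show why this becomes an arms race, here is a deliberately naive Python sketch of the kind of keyword screening that basic ATS tools are often described as performing. The keywords and resume line are invented and real systems are more sophisticated, but once applicants know a filter like this exists, stuffing in the right phrases becomes the obvious strategy.

```python
# Naive keyword-based resume screening (illustrative only).
import re

def keyword_score(resume_text: str, keywords: list[str]) -> float:
    """Return the fraction of required keywords found in the resume text."""
    text = resume_text.lower()
    hits = sum(
        1 for kw in keywords
        if re.search(r"\b" + re.escape(kw.lower()) + r"\b", text)
    )
    return hits / len(keywords)

if __name__ == "__main__":
    required = ["market analysis", "stakeholder management", "python", "sql"]
    resume = "Conducted market analysis and prepared client proposals using Python."
    print(keyword_score(resume, required))  # 0.5 -> two of the four keywords matched
```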

In my view, generative AI is most useful when it increases clarity rather than when it replaces human decisions. The difficulty in hiring lies in finding a balance between fairness and efficiency. Applied properly, AI can help candidates express themselves more effectively and help businesses manage high application volumes. However, if it takes over the process, there is a risk of reducing people to patterns and keywords. An algorithm cannot fully capture human characteristics like creativity, motivation, and cultural fit.

The key question regarding this interesting topic remains: How can we make sure AI not only speeds up recruiting but also makes it more transparent and inclusive?

Sources:
https://www.cmu.edu/intelligentbusiness/expertise/gen-ai-in-hiring_lee_100323.pdf

https://www.nature.com/articles/s41599-023-02079-x


Using AI – between comfort and caution

29 September 2025

Like many others, I have started to use AI more and more. I use it to summarize, to brainstorm ideas, to create images, and to improve my writing. These tools have become a digital “friend” that I can hardly go without. But while using them, I often catch myself asking whether I can really trust what they give me.

It brings a lot of comfort and convenience and saves me many hours of work. I can summarize big articles and extract what I want to know, which lets me focus on a good analysis rather than on information gathering. When I want to write an email, AI helps me turn rough thoughts and ideas into a well-structured piece, and so on. It feels like having an extra brain that works faster and better and never gets tired. There is a sense of relief in the thought that however big a task is, AI is there to help me.

However, I have learned that AI can also be wrong and still “hallucinates”, meaning that it generates output which sounds good but is not correct or based on facts (Howell, 2025). I have noticed that AI can write in a biased way and come up with ideas that sound good but break down on closer inspection. The danger lies not only in the fact that generative AI still makes mistakes, but in the temptation to take its output as truth and stop thinking critically. Skepticism is not only useful, but necessary.

Moving forward, I think the challenge of GenAI is not its capabilities; it is trust. As these tools become more integrated into education and work, we need more reliability and better ways to verify information. Until then, I need to balance skepticism with trust. In the end, AI is not there to replace thinking, but to sharpen it, and questioning its output also means questioning my own assumptions. GenAI has been valuable to me in exactly that way: it is not only about the answers, but also about asking better questions.

References:

Howell, C. T. (2025, September 24). AI hallucinations are creating real-world risks for businesses. Foley & Lardner LLP. https://www.foley.com/p/102l6q1/ai-hallucinations-are-creating-real-world-risks-for-businesses/?utm_source=chatgpt.com
