Using AI – between comfort and caution

29 September 2025

Like many others, I have started to use AI more and more. I use it to summarize, to help me brainstorm ideas, to create images, and to improve my writing. These tools have become a digital "friend" that I can hardly do without. But while using them, I often catch myself asking whether I can really trust what they give me.

It brings a lot of comfort and convenience, and it saves me many hours of work. I can summarize long articles and extract what I want to know, which lets me focus on good analysis rather than information gathering. When I want to write an email, AI helps me turn rough thoughts and ideas into a clear, well-structured piece, and so on. It feels like having an extra brain that works faster, thinks more broadly, and never gets tired. There is a sense of relief in knowing that however big a task is, AI is there to help me.

However, I have learned that AI can also be wrong and still "hallucinates", meaning it generates output that sounds plausible but is inaccurate or not grounded in facts (Howell, 2025). I notice that AI can write in a biased way and come up with ideas that sound good but break down on closer inspection. The danger lies not only in the fact that generative AI still makes mistakes, but in the temptation to accept its output as truth and stop thinking critically. Skepticism is not only useful, but necessary.

Moving forward, I think the challenge of GenAI is not its capabilities but trust. As these tools become more integrated into education and work, we need greater reliability and better ways to verify information. Until then, I need to balance skepticism with trust. In the end, AI is not there to replace thinking but to sharpen it; questioning its output also means questioning my own assumptions. GenAI has been valuable to me in exactly that way: it is not only about the answers, but also about asking better questions.

References:

Howell, C. T. (2025, September 24). AI hallucinations are creating real-world risks for businesses. Foley & Lardner LLP. https://www.foley.com/p/102l6q1/ai-hallucinations-are-creating-real-world-risks-for-businesses/
