Using AI – between comfort and caution

29 September 2025


Like many others, I have started to use AI more and more. I use it to summarize, to brainstorm ideas, to create images, and to improve my writing. These tools have become a digital "friend" that I can hardly go without. But while using them, I often find myself asking whether I can really trust what they give me.

AI brings a lot of comfort and convenience, and saves me many hours of work. I can summarize long articles and extract what I want to know, which lets me focus on good analysis rather than information gathering. When I want to write an email, AI helps me shape rough thoughts and ideas into a well-structured piece. It feels like having an extra brain that works faster, works better, and never gets tired. There is a sense of relief in the thought that however big a task is, AI is there to help me.

However, I have learned that AI can be wrong and still "hallucinates", meaning that it generates output that sounds plausible but is inaccurate or not grounded in fact (Howel, 2025). I notice that AI can write in a biased way and come up with ideas that sound good but break down on closer inspection. The danger lies not only in the fact that generative AI still makes mistakes, but in the temptation to take its output as truth and stop thinking critically. Skepticism is not just useful; it is necessary.

Moving forward, I think the challenge of GenAI is not its capabilities but trust. As these tools become more integrated into education and work, we need greater reliability and better ways to verify information. Until then, I need to balance skepticism with trust. In the end, AI is not there to replace thinking but to sharpen it, and questioning its output also means questioning my own assumptions. GenAI has been valuable to me in exactly that way: it is not only about the answers, but also about asking better questions.

References:

Howel, C. T. (2025, September 24). AI hallucinations are creating real-world risks for businesses. Foley & Lardner LLP. https://www.foley.com/p/102l6q1/ai-hallucinations-are-creating-real-world-risks-for-businesses/


AI in healthcare – supporting doctors and saving lives.

17 September 2025


Just a few years ago, AI was a futuristic concept. Now it is even transforming healthcare. The rapid developments in AI technology offer astonishing opportunities: it can increase efficiency and make healthcare more accessible, more economically sustainable, and more usable in practice (Artificial Intelligence in Healthcare, 2025).

What I find really interesting and quite impressive is the practical application of AI in clinical practice, including the following two examples:

Early detection of sepsis: AI is already used in intensive care units to detect the onset of sepsis, a condition that claims millions of lives a year. By giving fast and valuable insights, it can help prevent these life-threatening situations (Prenosis, 2024).

Breast cancer detection: Mortality is roughly 40% lower among women who attend breast screening. However, screening every woman is very time-consuming while hospital workforces are under pressure. In the Netherlands alone, 1 million women are screened each year, and since reading mammograms is prone to error, double readings are required in most countries. Despite all this, cancers are still sometimes missed. Studies now show that AI can analyze mammograms without a doctor's intervention, greatly reducing the workload and potentially even improving the quality of screenings (AI for Breast Cancer Screening May Replace Radiologists, 2023).

I find these developments in AI usage exciting. They show that AI can not only improve efficiency, but also save lives and genuinely improve the care system in ways that humans alone cannot. At the same time, however, I think it is important to stay nuanced. AI can support doctors, but human insight remains essential in ethical and complex situations. Additionally, doctors remain accountable for their decisions and cannot rely fully on AI. So personally, I see AI as a partner rather than a replacement in healthcare. It can improve outcomes and give doctors more time to focus on the human aspects of their work.

References:

AI for breast cancer screening may replace radiologists. (2023, June 6). Radboudumc. https://www.radboudumc.nl/en/news/2023/ai-for-breast-cancer-screening-may-replace-radiologists

Artificial Intelligence in healthcare. (2025, August 8). Public Health. https://health.ec.europa.eu/ehealth-digital-health-and-care/artificial-intelligence-healthcare_en

Prenosis. (2024, December 5). Sepsis ImmunoScore™ named a TIME Best Invention of 2024. https://prenosis.com/news/sepsis-immunoscore-named-a-time-best-invention-of-2024/
