When ChatGPT first came out, I didn’t use it at all. This wasn’t because I didn’t want to, but because my computer was too old to handle it. Looking back, that technical limitation probably saved me from jumping on the hype too early. But once I finally got access in the second year of my bachelor’s, everything changed.
At first, I only used it for university work: summarizing readings, explaining theories, or helping me structure assignments. Then I started realizing how it could fit into almost every part of my life. I used it to calculate travel times, plan trips, create to-do lists, draft emails, and even come up with meal ideas. Suddenly, I could delegate all the small decisions that usually cluttered my head. It didn’t just change how I studied; it changed how I thought about organizing my time.
Still, I’ve learned that AI can only help; it can’t think for me. It’s incredibly good at generating options and saving time, but it’s terrible at knowing what matters, and it’s the biggest yes-man there is. When I’m not critical of its answers, the output becomes generic and sometimes even feels ‘fake’, missing nuance or creativity. That’s when I’m reminded that these systems don’t actually “understand” us; they only predict and generate, from preexisting knowledge, what they think we want to hear. They’re only as good as the questions, context, and effort you put in.
Generative AI hasn’t replaced my thinking, but it’s reshaped it. I use it like a tool: powerful, convenient, but not infallible. It’s great at helping me do more and be efficient, but the “why” and “how” of what I create still have to come from me.
Discussion question: Can we truly call AI a partner in crime, or is it just a mirror reflecting the limits of our own input?