AI Makes Me Faster.. and a Little Dumber

2 October 2025


When I first started using ChatGPT, I used it mostly for questions I would normally have asked Google; I saw it as a smart assistant that answered really fast. Later I found out it was also useful for repetitive tasks like composing emails or summarizing large amounts of text. The more I worked with it, though, the more my experience taught me that generative AI is much more than a productivity tool. It has changed the way I work when it comes to creativity, problem-solving, and even collaboration.

One feature I like best is its flexibility. For a course last year, I asked ChatGPT to rewrite an academic essay for me, and at other times to generate brainstorming questions about concepts in visual design. The research confirms this flexibility: while ChatGPT and its peer models are primarily suited to text-to-text applications, the transformer architecture allows them to generalize to a broad set of tasks across many domains, from coding to translation to reasoning (OpenAI, 2023). This versatility makes generative AI a “general-purpose technology” with sweeping consequences (Brynjolfsson, Li & Raymond, 2023).

Positive things aside, ChatGPT has its own limits. Occasionally it provides information that sounds true but is not, what researchers refer to as “hallucinations” (Ji et al., 2023). When I used it to produce part of a literature review, for example, I had to check each reference extremely carefully. It does make you faster, but you cannot trust it completely, so accuracy remains your responsibility. Another disadvantage is that it is context-sensitive: if the assignment becomes too lengthy or complex, the prompts must be carefully written; otherwise, the system falls back on generic replies. Of course, this is only a problem when you choose to use ChatGPT; if you are supposed to write your assignments yourself, using it is not wise in the first place.

In my opinion, there is a less obvious downside: I sometimes find myself relying on it too much. ChatGPT can spit out answers rapidly, and I catch myself accepting them without actually examining them in any detail. It makes me “a little dumber” as a result, since I do not learn things myself. That has taught me that we need to use AI critically, as a tool for thinking and not a replacement for it. Still, I catch myself being lazy and simply believing what the chat said.

To become more reliable in the future, ChatGPT could do three things. First, draw only on trusted academic and professional databases, so there will be fewer hallucinations. Second, show users how it arrived at an answer. Third, make the software open source so developers and communities can improve the AI. The question is whether we want this: if ChatGPT can use all databases, writing something of your own will become harder, and if the chat can do it far quicker than you can, why would you bother?

As of this moment, in my opinion ChatGPT and other generative AI should be treated like a smart and quick assistant. It should not give you the feeling that you no longer have to think for yourself. I am curious what the future of ChatGPT will bring and how everybody will adapt to the changes AI brings.

References

  • Brynjolfsson, E., Li, D., & Raymond, L. (2023). Generative AI at work. NBER Working Paper No. 31161.
  • Ji, Z., Lee, N., Fries, J., Yu, T., & Jurafsky, D. (2023). Survey of hallucination in natural language generation. ACM Computing Surveys, 55(12), 1–38.
  • OpenAI. (2023). GPT-4 technical report. arXiv preprint arXiv:2303.08774.


5 thoughts on “AI Makes Me Faster.. and a Little Dumber”

  1. Very thoughtful piece of writing! I think it clearly outlines the issues we all face today with ChatGPT, the over-reliance, and the laziness to write our own essays from scratch. It does have some gems, like helping with writer’s block and stimulating creativity for those who feel they lack it, to name a few. You point out a very critical perspective, that we must use it as an assistant rather than a replacement for our work, because otherwise we will become even lazier. Moderation is a key principle in this discussion – not becoming overly dependent and actually resisting using it when it is really not necessary, but merely easier. There is so much to say about this topic, but in essence, it’s society that needs to become more aware of how to continue challenging our intellect and using AI to make us smarter, not dumber!

  2. I recognize myself in what you are saying. At first, I only used ChatGPT to ask questions I could have just googled, but now I sometimes rely on it to do most of my work. It makes me lazy, and I feel like I understand less than I used to.
    I also think that AI will play a big role in the future, but instead of letting it do all the work for us, we should focus on learning how to use it properly. It should guide us towards the right answer rather than serve as a replacement for our own thinking. That way we can still practice critical thinking and continue developing our writing skills. Because, in the end, the real challenge is to learn to ask better questions, not just to accept every answer.
    I’m curious how others are managing this balance between using AI as a helpful tool and keeping their own skills sharp.

  3. I definitely agree with your experience of using ChatGPT. During my thesis, I tried to use ChatGPT to find sources for my research. However, in almost all cases, it gave citations that either didn’t make any sense in the context or didn’t even exist. This showed me that we have to be very careful about the output that ChatGPT provides us with. I also feel like AI is making me a little more stupid (or at least lazy) as well. In the first year of my bachelor’s, I had to code and fix errors without using AI tools. This required critical thinking and problem-solving skills. During my third year, however, I could simply just upload my code, and ChatGPT would point out where a mistake was made and how to fix it, and I didn’t even have to understand why something was wrong. I think AI chatbots are definitely helpful in revising, giving feedback, and pointing out errors. However, we should always be careful that we understand why something is wrong, and keep checking its answers by using logical/critical thinking, so we don’t blindly trust any output ChatGPT provides.

  4. I believe this is a shared experience across the current generation of students. I definitely felt this way when I started writing my thesis and needed help writing some code as well as generating key pointers from extensive literature. I found myself trusting the generated responses a bit too much. So much so that I had a lingering doubt a couple of weeks before submission and decided to read through the generated notes, only to find several errors and made-up extractions. I was extremely glad I caught it early on; it definitely taught me to be more cautious and attentive when using AI for my projects. Currently, I only use ChatGPT to receive feedback on work I have already written. I find that this typically doesn’t drain me of my creativity but makes me think deeper about any gaps that remain.

    Truthfully, I don’t think AI is going to be sidelined any time soon. Judging by the measures schools and universities are taking to detect AI, reliance on it only seems to be steadily increasing among this demographic. Perhaps we should be more careful not to lose our ‘human touch’ to an intangible and artificial ‘being’. I believe a significant share of AI’s drawbacks could be resolved by addressing hallucinations and finding ways to minimize them.

  5. I have to strongly agree. Beyond the widely discussed issues of where AI makes mistakes and tends to fail, we are overlooking the effect it has on everyday people. I have noticed that lately I am relying too much on AI; generating ideas, brainstorming, or just starting a simple project suddenly feels like a serious task. People have stopped thinking about issues in a broader context and instead approach things the way AI does. I notice how much harder it has become to put a continuous text together, or even to write an email. When I write, I suddenly make so many grammar mistakes, and then I just feed the text to AI to check for errors, just like now.

    People also don’t test their memory much anymore, since they can immediately look up any information, and the brain adjusts to that. Maybe schools should start assigning more in-class essays and problems to be solved without the internet; merely detecting AI in submitted work will not be enough.

    We are somehow getting better at complex tasks but worse at the simple everyday tasks that are, in the end, more meaningful.
