Is GenAI Making Us More Productive or Just Lazy? An Analysis on Studying with Large Language Models (LLMs)

3 October 2024


Generative AI has already given us an abundance of opportunities that were not deemed possible only three years ago, from an early, clunky version of ChatGPT (which was already groundbreaking at the time) to generated videos that can hardly be recognized as AI. These technologies are improved regularly, leading to better versions of LLMs (such as ChatGPT and Google’s Gemini), which ultimately makes many tasks easier and faster for humans to perform.

Relating this to my own situation, I can say that these text-generating forms of AI have made studying easier, from simply using them to improve the formal language and grammar of my writing, to asking for solutions to specific assignment-related questions. Especially the latter has made tasks easier, as GenAI can provide a starting point to build my own thinking on. If I, for example, ask Gemini to give me three examples of firms that use data monetization and how these firms do so, I almost instantly receive an answer I can use to start my own. The time it saves me on researching such simple questions myself might seem marginal, but added up it can really make a difference. One can therefore argue that these forms of GenAI increase a student’s productivity quite significantly.

However, a potential downside I have noticed in this process is the passive learning I experience. By using AI, I hand the thinking process over to Gemini, after which I only have to read the responses and decide which one best suits the question I need to answer. If an answer is not to my liking, I can simply ask the LLM for a new one, or slightly change my prompt in the hope that the answer changes accordingly. My own learning thus becomes more passive than if I had to actively think of my own answers, which, in my opinion, can be considered becoming lazy. During the guest lecture, Stefan van Duin also mentioned that one of his biggest concerns about GenAI is that it can make us lazy human beings. In the context of that thought, I feel the process of making us lazy has definitely already begun.

In conclusion, I argue that LLMs have made us more productive and have given us opportunities to enrich ourselves far more easily and quickly than we could ever have imagined. However, we should be careful about how our use of AI affects our own development: we should not let our thinking become lazy by fully outsourcing our tasks to LLMs. Perhaps universities should focus on designing assignments that cannot simply be done with these tools. All in all, it is a double-edged sword that we will need to handle carefully as the ever-improving LLMs of the foreseeable future arrive.

(I couldn’t come up with a saying for that last sentence, so I intuitively asked Gemini, which quickly came up with ‘double-edged sword’. Just an example of how embedded it might already be in my system!)


1 thought on “Is GenAI Making Us More Productive or Just Lazy? An Analysis on Studying with Large Language Models (LLMs)”

  1. Hi Gijs, nice article!
    I started using ChatGPT for coding-related problems, but quickly realized it could be used for millions of different tasks: from asking for a recipe with what’s left in my fridge, to giving me ideas on what to do if I’m ever bored in class, from summarizing really long academic articles to creating a sort of customized “RPG adventure” in the style of a gamebook. Honestly, my usage of ChatGPT has increased dramatically since I started this semester at RSM. The reason, as you mentioned, is simple: with a lot of “small tasks” to do (really different from the big, infrequent tasks at my previous university), it is really easy to feed them to the LLM, “just to have a starting point”. However, I would argue – as you correctly mentioned – that the starting point itself is what really makes you learn something. As with mathematics, geometry or physics problems (my previous field of study), the focal point is to go through that trial-and-error phase, to “bang your head”, as we say in Italy. In the example you gave, finding the three organizations yourself would have helped you better understand how firms can monetize data, improving your learning experience. And I really don’t want to patronize you; it’s something I’m doing more and more as well, and I think everyone is, in fact. As you said, in the end it’s up to us to know where to use LLMs and where to stop, so as not to interfere with our cognitive capabilities and become “lazy people” who use AI for everything. Speaking of this, I recommend Noah van Lienden’s article “console.log(“Can I Actually Code?”);” on this blog.
    AI is a tool, and like every other tool, it’s ultimately up to us to use it in the right way.
