The use of generative AI by students has increased significantly in recent years, but how useful are these tools in practice? My feelings are mixed: sometimes brilliant, sometimes frustrating. During my thesis research, I used ChatGPT for coding. The tool seemed to understand the task and produced plenty of code, but I quickly realized this wasn't efficient: the output was often unnecessarily long, which increased the risk of errors and made it hard to follow. My next step was watching YouTube coding tutorials, and the combination proved effective: ChatGPT provided direction and ideas, while the tutorials helped with the practical implementation. The open question is whether generative AI will ever be able to code without human correction.
For my thesis research, I also used DALL·E, OpenAI's image generator. I'm not a creative person myself, but this tool suddenly gave me a graphic designer at my disposal: I created my cover page and a few other images with it in fifteen minutes.
NotebookLM also proved valuable. When writing a thesis, you are constantly looking for sources that support your research or expose its weak points, and these articles are often long and complex. Here generative AI helps: you upload an article as a PDF and the tool produces a concise summary. At the same time, I wondered whether I would miss nuance and detail if I relied too heavily on these summaries. In practice, important details did sometimes turn out to be missing.
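To make that summarization step concrete, below is a minimal sketch of the same upload-a-PDF-and-summarize idea in Python. It is not NotebookLM's own interface (NotebookLM is a web app with no public API that I use here); it assumes the pypdf and openai packages, an OPENAI_API_KEY in the environment, and a placeholder model name.

```python
# Hypothetical sketch: summarize a source PDF with a general-purpose LLM.
# This is NOT NotebookLM's API; pypdf, openai, and the model name are assumptions.
from pypdf import PdfReader
from openai import OpenAI


def summarize_pdf(path: str, max_chars: int = 12000) -> str:
    # Extract plain text from the PDF, truncated to stay within the model's context window.
    reader = PdfReader(path)
    text = "\n".join(page.extract_text() or "" for page in reader.pages)[:max_chars]

    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {
                "role": "system",
                "content": "Summarize academic articles concisely, keeping key nuances and limitations.",
            },
            {"role": "user", "content": text},
        ],
    )
    return response.choices[0].message.content


if __name__ == "__main__":
    print(summarize_pdf("article.pdf"))
```

Even with a prompt that explicitly asks to keep limitations, a sketch like this has the same weakness I noticed in practice: the summary can silently drop details, so it complements rather than replaces reading the source.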
In my opinion, generative AI should be seen as a sparring partner rather than a replacement. What do you think: should we view AI primarily as a tool, or should we accept that it will eventually take over some of our thinking?