Once again, a blog post about ChatGPT. And while most posts talk about the benefits this technology brings, in my opinion it also needs to be looked at critically.
What I’ve noticed recently is that a lot of students use ChatGPT to get answers to questions about programming an application or website because they lack the knowledge themselves. That sounds great: you ask a model to develop something for you, and you barely need to know the programming language yourself. But what are the implications if this is done on a large scale?
I’ve noticed that ChatGPT gives near-identical answers to the same programming questions. On top of that, I see the same snippets of code popping up in multiple places; often not because they are written so elegantly, but because they come from the same source: a language model such as ChatGPT. Because students are less inclined to think through and write their own code, all the submissions look more and more alike. To me, this shows that this development, which mainly offers convenience to the user, could cause innovation to diminish in the long run, because the same structures and patterns are reused over and over.
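To make this concrete, here is a sketch of the kind of generic, interchangeable snippet I mean. The function and names below are hypothetical, but the shape is representative: ask different people to have ChatGPT fetch JSON from an API, and you tend to get almost exactly this code back.

```python
# A hypothetical but representative example of the interchangeable
# boilerplate that keeps reappearing in AI-generated answers:
# the same names, the same structure, the same error handling.
import requests

def fetch_data(url):
    """Fetch JSON data from the given URL."""
    try:
        response = requests.get(url, timeout=10)
        response.raise_for_status()  # raise on HTTP 4xx/5xx errors
        return response.json()
    except requests.exceptions.RequestException as e:
        print(f"An error occurred: {e}")
        return None
```

The code works, and that is exactly the point: when everyone submits this same pattern, nobody is exploring alternative structures anymore.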
Think of it like this: if I ask an AI model that generates images instead of text to design something in the style of a certain artist, it will indeed produce something that did not exist before, but in a pre-existing style. My point is that the current form of artificial intelligence always works from pre-existing artefacts to arrive at a result; it is strong at reusing a particular technique, but not at coming up with something completely new. Will that ultimately cause progress to stagnate?
What do you think?
(Image source: https://bair.berkeley.edu/blog/2022/05/03/human-in-the-loop/)