ChatGPT can write assignments just as well as students can, and that’s the problem

1 October 2023


Until a few weeks ago I had never personally used generative AI, but having tried it now I am equally excited and disappointed. ChatGPT is like the most intelligent first-year student in the class: full of raw potential, but it doesn’t really know anything.

There are many reports of ChatGPT passing academic tests. Terwiesch (2023) indicates that ChatGPT usually produces 50 to 70 percent correct answers, and he had previously found that it could pass an MBA course at the Wharton School, where he is a professor. From my limited experience it is obvious that ChatGPT holds a lot of information, but it seems to me that it doesn’t really know anything. Ask it any question and it will give an information-packed answer that seems about right, just as any proper university student can. However, the fact that ChatGPT or a student can argue any point does not make it correct. Tell ChatGPT that it is wrong, or challenge it, and it will smoothly adjust its story to fit your criticism or additions. ChatGPT will not readily make claims that are factually inaccurate, but it certainly makes claims that can be (academically) spurious. As part of my AI experiment I decided to double-check my answer to a homework assignment question by posing it to ChatGPT. Its answer was plain wrong.

To someone new to these concepts, the first answer seems quite valid. Only after I questioned it did ChatGPT correctly identify the moral hazard problem, with all the required argumentation to validate its renewed claim. ChatGPT is a spineless pushover, and by correcting itself so readily it reveals itself to be unreliable.

Another example illustrates the variance in its answers. Hoping to use ChatGPT as a study partner, I asked it to test my knowledge of transaction cost theory. It actually asked an excellent question (to give three determinants), but then applauded my answer even though I had deliberately included one wrong determinant. When I posed the same question back to it in a new chat, ChatGPT luckily did not mention my false determinant. And yet it had originally let me believe my answer was right when it was wrong.

While testing ChatGPT’s ability to pass an MBA, Terwiesch (2023) found the same thing, though he applauds ChatGPT’s capacity to correct itself after receiving human hints. He does call the quality of ChatGPT’s answers erratic and describes numerous mistakes that it makes with the utmost confidence (Terwiesch, 2023). For this reason, Terwiesch (2023) recommends using AI for creative purposes, where unpredictability is useful, but not for any serious assignment where accuracy is important.

While I used ChatGPT 3.5, OpenAI has since released GPT-4, which is supposed to produce more accurate and useful results (OpenAI, n.d.). Nevertheless, according to Murgia (2023), GPT-4 suffers from the same limitations: Murgia (2023) cites a limited comprehension of context, an inability to learn from experience, and ‘hallucinations’ that limit its reliability.

My conclusion is simple. ChatGPT can indeed write assignments as well as students can. But students can be wrong, despite being able to make wonderful arguments for any point. ChatGPT has the same problem. It can be very helpful for generating ideas and sparking thought, but it cannot be relied upon. In those cases where the requested answers are neither simple facts nor completely open to argumentation, ChatGPT fails. Using it can be very helpful, but I trust it less than my fellow students. At least they know how certain they are of their answers. My suggestion is to use ChatGPT as a support, but to rely on your own knowledge and research for anything that actually matters, such as your education.

References:

Murgia, M. (2023, March 14). ChatGPT maker OpenAI unveils new model GPT-4. Financial Times. https://www.ft.com/content/8bed5cd7-9d1e-4653-8673-f28bb8176385

OpenAI. (n.d.). GPT-4. Retrieved September 30, 2023, from https://openai.com/gpt-4

Terwiesch, C. (2023, March 12). Let’s cast a critical eye over business ideas from ChatGPT. Financial Times. https://www.ft.com/content/591ad272-6419-4f2c-9935-caff1d670f08

Terwiesch, C. (2023). Would Chat GPT3 get a Wharton MBA? A prediction based on its performance in the operations management course [White paper]. Mack Institute for Innovation Management at the Wharton School, University of Pennsylvania. https://mackinstitute.wharton.upenn.edu/wp-content/uploads/2023/01/Would-ChatGPT-get-a-Wharton-MBA.pdf

