While ChatGPT is an amazing tool to get inspired or assisted by while doing work, its answers should always be questioned. Its launch in November 2022 caused both excitement and chaos in the world of AI: some workplaces, universities, and faculties banned the tool, while others provided guidelines on how to work with it and even optimize its results.
Since its launch, the tool has been used for many different purposes, such as drafting emails, articles, and social media posts, but also solving math problems, debugging code, and generating art (Hetler, 2023). One of the things I’ve recently used it for is market research. A few weeks ago I started doing market research on a specific industry, set of countries, and products, and I realized ChatGPT could be of great help, so I asked it a few questions.
One of the criteria of my research was that companies had to have been founded in certain countries. At first, I thought ChatGPT was providing me with exactly the lists of companies and products I was looking for: great! However, after diving into these companies, I realized ChatGPT was not answering my input correctly at all. It was indeed providing me lists of companies, but the criteria were not followed, even after I specifically altered my input. At first, the generated responses even claimed the companies were from certain countries, explaining each company’s whole history. When I then asked ChatGPT directly whether company X was from country Y, it would give me the correct answer, while apologizing for being wrong before. Until I realized this was a pattern, it only added to my workload, as I had to dive deeper into the list of companies individually.
Continuing my research, I realized ChatGPT is better used to gain inspiration and tips than to obtain factual answers. And this has been recognized by other AI users as well. Pearl (2022) ran extensive test prompts showing how this AI tool is more often wrong than right, at least for slightly complex factual questions. The capital of a country might be a reliable answer, but anything more complex cannot be relied upon from ChatGPT.
Hopefully, users of AI tools will be given more guidelines and warnings on how to use them. If users are not informed about how to use ChatGPT, for instance, incorrect information will spread even further. AI tools can be convenient, depending on the goal of their usage. It also leaves me with a question: with further development, do you think AI tools will overcome this problem, or is this still too much of a “human” task to leave to AI?
References
Hetler, A. (2023). ChatGPT. WhatIs.com. https://www.techtarget.com/whatis/definition/ChatGPT
Pearl, M. (2022, December 3). ChatGPT from OpenAI is a huge step toward a usable answer engine. Unfortunately its answers are horrible. Mashable. https://mashable.com/article/chatgpt-amazing-wrong
Very interesting post! I’ve always had the same issue when looking for statistics in ChatGPT and could never blindly trust its answers. Nonetheless, I think there is a big distinction between what you can accomplish with GPT-3.5 and GPT-4, especially after the new updates. Now that you can quite literally paste links into GPT-4, it is much easier to get factually correct data and not just the usual assumptions you would receive from GPT-3.5. Regarding your question at the end, I definitely think AI tools are already overcoming this issue as we speak, and it will only be a matter of time until they are fully accurate. I do agree with you, though, that some guidelines need to be in place, as many people will struggle to grasp the full potential ChatGPT has to offer and may end up spreading misinformation by accident.
I think you wrote a great blog that emphasized some important issues with AI.
I agree with the other comment on this post that there is a big difference in these issues when looking at ChatGPT 3.5 and ChatGPT 4.
However, the majority of the population is currently using ChatGPT 3.5, and we are not sure when or if ChatGPT 4 will become more accessible to the general population, let alone for free.
In my opinion, this is not a big issue. The fact that we can use ChatGPT for inspiration is, in my view, a valuable function. It leads to better ideas and can also lead to more efficiency, especially in the business world. But, in my opinion, it’s also a good thing that ChatGPT doesn’t yet provide perfect answers to your questions. This keeps us sharp and intelligent as humans, and also prevents us from becoming lazy.
If ChatGPT could have perfectly carried out your entire research, would you have been honest and reported it to your employer? Or would you not have reported it and still benefited from the salary?
In my opinion, the complications that come with ChatGPT 3.5 are not all that bad, and it’s actually good that we need to remain vigilant about the answers it generates. But, of course, we don’t know how this will be in the future, which I find very exciting, but also a bit scary.
Great insights! I feel like you make a good point about humanity staying sharp and intelligent and not becoming lazy. I fully agree, and I think ChatGPT for inspiration is a great tool, but nonetheless just a tool.