When I first started using ChatGPT, I used it mostly for questions I would normally have asked Google; I saw it as a smart assistant that answered very quickly. Later I discovered it was also useful for repetitive tasks such as composing emails or summarizing large amounts of text. The more I worked with it, though, the more my experience taught me that generative AI is much more than a productivity tool. It has changed the way I work when it comes to creativity, problem-solving, and even collaboration.
The feature I like best is its flexibility. For a course last year, I asked ChatGPT to rewrite an academic essay for me, and at other times to generate brainstorming questions about concepts in visual design. This flexibility is well documented: while ChatGPT and its peer models are primarily built for text-to-text applications, the transformer architecture allows them to generalize to a broad set of tasks across many domains, from coding to translation and reasoning (OpenAI, 2023). This versatility makes generative AI a “general-purpose technology” with sweeping consequences (Brynjolfsson, Li & Raymond, 2023).
Positives aside, ChatGPT has its limits. Occasionally it provides information that sounds true but is not, what researchers refer to as “hallucinations” (Ji et al., 2023). When I used it to produce a literature review, for example, I had to check each reference extremely carefully. It does make you faster, but you cannot trust it completely, so accuracy remains your responsibility. Another disadvantage is that it is context-sensitive: if the assignment becomes too long or complex, the prompts must be carefully written, or the system falls back on generic replies. Of course, this is only a problem when you are allowed to use ChatGPT; when you have to write assignments yourself, it is not smart to use it at all.
In my opinion, there is also a less obvious downside: sometimes I find myself relying on it too much. ChatGPT can produce answers rapidly, and I catch myself accepting them without really examining them in detail. This makes me “a little stupider,” because I am not learning things myself. That has taught me how important it is to use AI critically, as a tool for thinking rather than a replacement for it. Still, I catch myself being lazy and simply believing whatever the chat says.
To become more reliable in the future, ChatGPT could do three things. First, it could draw only on trusted academic and professional databases, which would reduce hallucinations. Second, it could show users how it arrived at an answer. Third, the software could be made open source so that developers and communities can improve it. The question is whether we actually want this: if ChatGPT can draw on every database, writing something of your own becomes harder to justify, and if the model can do it far more quickly than you can, why would you bother?
For now, in my opinion, ChatGPT and other generative AI should act like a smart, fast assistant. They should not give you the feeling that you no longer have to think for yourself. I am curious what the future of ChatGPT will bring and how everyone will adapt to the changes AI introduces.
References
- Brynjolfsson, E., Li, D., & Raymond, L. (2023). Generative AI at work (Working Paper No. 31161). National Bureau of Economic Research.
- Ji, Z., Lee, N., Fries, J., Yu, T., & Jurafsky, D. (2023). Survey of hallucination in natural language generation. ACM Computing Surveys, 55(12), 1–38.
- OpenAI. (2023). GPT-4 technical report. arXiv preprint arXiv:2303.08774.