
Over the last year, I have used many different GPTs on the OpenAI platform: mostly the standard ChatGPT chat function, but also tools for visualizing certain ideas. My experiences with them have varied considerably.
In my opinion, text-to-image and text-to-video GenAIs are improving rapidly, but their output is still nowhere near indistinguishable from real photos and videos. When an AI-generated video appears in my feed, it is immediately clear to me that it is fake. I am aware that I may only recognize these videos because of my daily exposure to this kind of content; my grandmother, however, would be much easier to trick. As these text-to-image and text-to-video GenAIs improve further, this could become dangerous: which pictures are real? People could start using fake images and videos for all sorts of harmful purposes. On balance, I see far more downsides and potential negative consequences than positive ones.
During the early days of ChatGPT, I was not yet convinced of its usefulness as a tool. Many of the results it produced were simply wrong, or the chatbot did not give the answer I was looking for. In the last couple of months, however, major improvements have been made. The GenAI now seems to be "thinking" for a while before answering questions or performing certain tasks, which in my opinion has significantly improved the quality of the output. Furthermore, I think it's great that answers now often end with a suggestion I might not have thought of myself, for example: "Here is the data you were looking for. Would you like me to visualize it in a graph?" Yes please!
However, ChatGPT still "hallucinates," meaning it sometimes provides made-up information, sources, or citations. This is frustrating and reduces the positive impact on productivity, since I constantly have to check whether the information is correct. I also see a potential danger on the horizon: GenAI tools might mask these hallucinations by inventing plausible-looking sources to back them up. In that case, the credibility of all information on the internet would be diluted. But that seems a little apocalyptic… or does it?