Over recent months, Generative AI tools, specifically ChatGPT, have completely transformed how I access information and complete tasks. Traditional search engines like Google provide an overwhelming number of links and resources, while ChatGPT delivers a concise, tailored response specific to your needs. This has made Google almost entirely obsolete for most of my more specific questions.
Whether I am brainstorming ideas for assignments or looking for an explanation of a complex topic, ChatGPT has clearly become my go-to source. Just the other day, for example, it became an important resource as I prepared for an interview. It provided me with practice questions and gave feedback on my responses, making them more refined and relevant to the position I was applying for. I found this highly valuable, as it boosted my confidence and let me rehearse my answers before the actual interview.
Beyond academics, GenAI has helped in areas such as creative writing and coding in my pastime. Its ability to produce clear text instantly has changed the way I work whenever I become stuck or need to do research. Despite its usefulness in all these areas, however, I have encountered several instances of inaccurate information.
Although receiving a single refined response improves efficiency, this becomes a significant problem when that information is inaccurate. For example, while conducting research for a literature review, I realised upon further investigation that certain sources ChatGPT provided me were completely fabricated. This opened my eyes to the dangers GenAI can pose. I had become so used to ChatGPT as my main source for information gathering that I had built up a feeling of trust in its responses, and that is a dangerous precedent. While Google can be overwhelming for certain questions, having such a large array of sources helps with detecting misinformation. With GenAI, receiving a single response can make it difficult to determine what is truly accurate.
Moving forward, I think further improvements in accountability, user interface, and source tracking could make these tools even more effective than they already are. One example would be integrating more advanced personal context based on prior conversations. This, of course, brings additional security risks, but if it can be made safe, it would help tailor answers further to users' needs. In general, my experience with GenAI has completely changed the way I gather information, but it also raises a question: is the current standard of GenAI the kind of tool we want replacing traditional search engines like Google, when inaccurate information can be so damaging and difficult to detect?