My love-hate relationship with ChatGPT: Trust issues exposed

8 October 2024


In a world where technology is everywhere, artificial intelligence like ChatGPT has become a big part of our everyday lives. My experience with this AI has turned into a complicated love-hate relationship filled with enthusiasm, confusion and frustration.

Building trust

When I first started using ChatGPT, I was excited. It felt like having an assistant always near me, ready to help with my questions, schoolwork, recipes and even emails. At times it was even better than Google. I could ask questions and get clear answers almost immediately. At first I thought it was fantastic and that I could rely on it for anything. The AI provided explanations, helped me brainstorm ideas and suggested solutions to problems I was struggling with. In those early days it felt like I was forming a solid partnership.

Doubts start to appear

However, the excitement did not last long. When I started asking more straightforward school-related questions, like “Is this right?”, to check if I was on the right track with my homework, I found myself getting different responses each time. I expected a confirmation, but instead I received answers that did not match what I was looking for.

I once intentionally gave a wrong answer to a question and asked if it was right, just to see how ChatGPT would react. When it told me my answer was right, I asked, “Are you sure?” and it replied, “I apologize for the mistake. Let me provide the correct information.” That left me more confused than ever. How could it change the answer so quickly? It was hard to trust it when it seemed so inconsistent.

Growing trust issues

As I used it more often, my trust issues grew. I found myself repeating questions, hoping for a consistent answer. There were moments when I spent more time discussing things with ChatGPT than it would have taken to just do the task myself. I would find myself getting frustrated and typing in all caps. I felt like I was talking to someone who did not even want to understand me. Instead of feeling helped, I felt like I was only arguing back and forth, and it was exhausting.

Realising that my frustration was only increasing, I knew I had to change the way I asked my questions. I started double-checking answers and used other sources to confirm information. I realized that while ChatGPT could be a helpful tool, it was important to verify the information I got. I learned to ask more specific questions and provide additional context, which led to better results.
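To make the lesson concrete, here is a minimal sketch of what “more specific questions with additional context” looked like in practice. The helper below is purely illustrative (it is not any official API, and the names `build_prompt`, `task`, `context` and `expected_format` are my own invention); it just shows how a vague question can be turned into a prompt that states the subject, the attempt and the expected reply.

```python
def build_prompt(task: str, context: str = "", expected_format: str = "") -> str:
    """Assemble a prompt from a task, optional context, and an expected answer format."""
    parts = [task.strip()]
    if context:
        # Stating the actual question and my attempt removes the guesswork.
        parts.append(f"Context: {context.strip()}")
    if expected_format:
        # Asking for a fixed reply shape makes inconsistency easy to spot.
        parts.append(f"Answer format: {expected_format.strip()}")
    return "\n".join(parts)

# The kind of vague question that got me contradictory replies:
vague = build_prompt("Is this right?")

# The same question with subject, attempt and reply format spelled out:
specific = build_prompt(
    task="Check whether my homework answer is correct.",
    context="Question: What is 15% of 80? My answer: 12.",
    expected_format="Reply 'correct' or 'incorrect', then a one-line explanation.",
)
print(specific)
```

The difference is that the specific version leaves the model nothing to guess: it knows what is being checked, what my answer was, and how to respond.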

Lessons learned

I learned important lessons about trust, not just with AI but in all areas of life. Trust takes time and clear communication. It is important to realise that even advanced technology can make mistakes. My relationship with ChatGPT changed from blind trust to a more cautious partnership. I learned to appreciate its strengths while acknowledging its limitations.

Looking back on my experience with ChatGPT, I realised how unreliable technology can be. While my experience has had its conflicts, I still appreciate the value it brings to my learning process. Have you ever felt frustrated using AI? You are not alone. Let’s share our struggles and find ways to make it work better for us!



The day ChatGPT outstripped its limitations for Me

20 October 2023


We have all known ChatGPT since the technological frenzy of 2022. This computer program was developed by OpenAI using the GPT-3.5 (Generative Pre-trained Transformer) architecture. It was trained on a huge dataset and can generate human-like text based on the prompts it receives (OpenAI, n.d.). Many have emphasized the power and disruptive potential of such emerging technology, whether in supporting market research and insights or in legal document drafting and analysis, both of which increase human efficiency (OpenAI, n.d.).

Figure: Hype Cycle for Emerging Technologies, retrieved from Gartner.

However, despite its widespread adoption and the potential of generative AI, there are still many limitations that prevent us from using it to its full potential. Examples are hallucinated facts and a high dependence on prompt quality (Alkaissi & McFarlane, 2023; Smulders, 2023). The latter issue is the main topic of this blog post.

In the past, I asked ChatGPT, “Can you create diagrams for me?”, and this was ChatGPT’s response:

I have been using ChatGPT for all sorts of problems since its widespread adoption in 2022 and have had many different chats, but I always tried to keep similar topics in the same chat, thinking, “Maybe it needs to remember; maybe it needs to understand the whole topic for my questions to get a proper answer.” One day, I needed help with a project for work in understanding how to create a certain type of diagram, since I was really lost. ChatGPT helped me understand, but I still wanted concrete answers; I wanted to see the diagram with my own two eyes to make sure I knew what I needed to do. After many exchanges, I would try again and ask ChatGPT to show me, but nothing.

One day, the answer came. I provided ChatGPT with all the information I had and asked again: “Can you create a diagram with this information?” That is when, to my surprise, ChatGPT started creating an SQL interface, representing each part of the diagram one by one, with the links between them, and ending with an explanation of what it did. Part of the diagram is shown below (for work confidentiality reasons, the diagram is anonymized).

It was a success for me: I had made ChatGPT do the impossible, something ChatGPT itself had said it could not provide. That day, ChatGPT outstripped its limitations for me. This is how I realized the importance of prompt quality.

This blog post shows the importance of educating the broader public and managers in technological literacy in the age of Industry 4.0, and how, with the right knowledge and skills, generative AI can be used to its full potential to enhance human skills.

Have you ever managed to make ChatGPT do something it said it couldn’t with the right prompt? Comment down below.

References:

Alkaissi, H., & McFarlane, S. I. (2023). Artificial hallucinations in ChatGPT: Implications in scientific writing. Cureus, 15(2).

Smulders, S. (2023, March 29). 15 rules for crafting effective GPT Chat prompts. Expandi. https://expandi.io/blog/chat-gpt-rules/
