Have you ever had the feeling that the AI models you talk to, such as ChatGPT, Gemini, or DeepSeek, are sometimes a bit too agreeable? Not only are they agreeable, they also rarely question your logic and often accept your assumptions. This raises the question: how helpful is GenAI if it always agrees with you? Does it help us structure our thoughts and stay critical, or does it just say what we want to hear?
I have used GenAI in a variety of ways: asking quick questions about how to repair something in my house, having it teach me topics I am studying for school, or getting advice on business ideas. For that last use case, I quickly noticed that I rarely received critical feedback; instead, I got answers that reinforced my assumptions. This experience is in line with what we discussed in class about credible analytics (Vidgen et al., 2017): data-driven insights are only useful if they are accurate, critical, and transparent. Just as flawed analytics can lead to bad business decisions, AI tools that are overly agreeable can reinforce bias and produce equally poor decisions.
This insight can also be linked to the EU AI Act's requirements on transparency and human oversight. AI tools that are designed to please users could unintentionally amplify misinformation. So how can we make AI more critical? One idea is to design a mode that switches the AI from a yes-man into a devil's advocate. Another idea is to encourage users to include an additional section in their prompt that the AI uses as a guide for the rest of the conversation, as in the sketch below.
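To make the idea concrete, here is a minimal sketch of what such a "devil's advocate" mode could look like, assuming the OpenAI Python SDK as the interface; the system prompt wording, the model name, and the example question are my own illustrative choices, not a prescribed recipe, and the same approach would work with any chat-style GenAI API.

```python
# Sketch of a "devil's advocate" mode: a system prompt that asks the model
# to challenge the user's assumptions before offering any encouragement.
# The prompt text and model name below are placeholders for illustration.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

DEVILS_ADVOCATE_PROMPT = (
    "You are a devil's advocate, not a cheerleader. "
    "Before agreeing with the user, identify their key assumptions, "
    "point out at least two weaknesses or risks in their idea, "
    "and only then suggest how the idea could be improved."
)

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder model name
    messages=[
        {"role": "system", "content": DEVILS_ADVOCATE_PROMPT},
        {"role": "user", "content": "I want to open a third coffee shop on my street. Good idea?"},
    ],
)

print(response.choices[0].message.content)
```

The point of the sketch is not the specific wording but the design choice: the critical stance is set once, up front, so the user does not have to remember to ask for pushback in every message.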
Would you prefer an AI that is overly agreeable or one that challenges your ideas?