AI as a Pacifier: Why Face a Challenge?

8 October 2025


During one of my bachelor’s courses on technology-augmented behaviour, we discussed the role of the smartphone as an adult ‘pacifier’. In their research article, Melumad & Pham (2020) propose that the smartphone acts as a tool that can provide users with emotional comfort, reassurance, and even stress relief. Whenever we feel an unwanted emotion, such as boredom, insecurity, or sadness, this device is a pocket-reach away from distracting us. 

Nowadays, whenever I am confronted with the choice, ‘Do I use generative AI or not?’, I cannot help but view AI as a similar sort of pacifying technology. People are already frequently using ChatGPT as their personal therapist (Collins et al., 2025), turning to tech for emotional support. But genAI can also take away many (frustrating) obstacles we face when doing academic and/or corporate work. 

Personally, I first started using AI about a year before I took the bachelor’s course. When I was unsure of my own writing, I let ChatGPT improve it. When I was overwhelmed by the content of a scientific article I had to read, I uploaded it to PopAi and the platform summarised it for me. When I couldn’t come up with a good idea for an assignment, or I was frustrated trying to find relevant sources, genAI was just a search away to help me out. It is incredible that our technology has developed to the point where this is even possible, and yet, there came a moment at which I began to question my own technologically augmented behaviour. 

As time went on, I did not become a better writer by using AI. Nor did I become more creative at generating my own ideas, better at doing research, or better at grasping scientific articles. Because whenever I was faced with an obstacle, AI let me walk around it. Just as the smartphone offers continuous distraction, genAI provides the continuous outsourcing of work. The essential question is: where do we draw the line? Should we be teaching students to use one genAI platform (e.g., ChatGPT) to generate input for another genAI platform (e.g., Lovable, Gamma, v0) because ‘the output tends to be better that way’, as happened in our prototyping guest lecture? 

Of course, genAI can also be a great tool in the process of creating one’s own work. And yet, I wonder whether our collectively compulsive genAI use will have significant consequences for our near-future ability to create and think critically for ourselves. 

By allowing ourselves to always walk around the obstacle in front of us, we are depriving ourselves of any challenge that is worth facing. 

References

Collins, A. C., Lekkas, D., Heinz, M. V., Annor, J., Ruan, F., & Jacobson, N. C. (2025). ChatGPT as therapy: A qualitative and network-based thematic profiling of shared experiences, attitudes, and beliefs on Reddit. Journal of Psychiatric Research, 191, 277–284. https://doi.org/10.1016/j.jpsychires.2025.09.057

Melumad, S., & Pham, M. T. (2020). The smartphone as a pacifying technology. Journal of Consumer Research, 47(2), 237–255. https://doi.org/10.1093/jcr/ucaa005


2 thoughts on “AI as a Pacifier: Why Face a Challenge?”

  1. This blog post reminds me of a story written by Isaac Asimov called Profession. (Spoiler alert.) In this short novel, he depicts a future where children have knowledge and skills directly “taped” into their brains on a special day, completely replacing traditional learning. To perform this taping, every child is tested to determine the most suitable profession. However, the main character of the story fails the test, is deemed unfit for the procedure, and is placed in a special facility. His roommate is secretly a psychologist who observes him as a part of a special selection process.

    In the end, it turns out that this facility was created for people with exceptionally high cognitive abilities. It is used to select a subset of them and employ them as the ones who create the knowledge for these tapes, which are later mass-distributed. However, this process requires studying the old-fashioned way, because to advance anything, you first need to understand its foundations.

    This division corresponds to the two approaches to getting things done discussed in the blog post. One is the easy path: using generative or other forms of AI to complete a task for you, at the sacrifice of never learning to perform the process yourself. Eventually, the ceiling of your skill is defined by that limitation, because to tackle more complex tasks, you must first build that foundational understanding. The second approach is choosing not to use AI, which means learning by yourself at the beginning, sacrificing convenience and time.

    This conceptual question is highly relevant for us as students and future professionals. The traditional education we have been pursuing has always been about building that foundation, enabling us, as the future of academia, to make advancements and breakthroughs in our respective disciplines. In contrast, the work environment challenges us simply to get things done, where apparently nobody really cares how it’s done as long as the task is completed.

    This is indeed a very interesting debate raised by the author, and many of the questions it brings up unfortunately do not have a single right answer.

  2. Hey Estelle, really interesting blog!
    I personally believe that the line should be drawn by the learning objective. I agree with you that for foundational skills like critical thinking or writing, using AI should be heavily discouraged or even forbidden. Struggling with an obstacle is the entire point of these exercises. However, for some tasks I think AI is a necessary professional skill, e.g., exploring data patterns or managing a content pipeline. I think the use of AI shouldn’t be seen as inherently better or worse than not using it. It should be framed as a specific tool with a trade-off: it offers speed at the potential cost of deep understanding.
