Over the last few years, AI has slowly entered my everyday routine. It helps me make grocery lists from a picture of a recipe I want to cook, test how a new dining table might look in my living room, or design a personalized invitation for a gathering I’m hosting.
Not long ago, styling my home meant flipping through interior design magazines or building endless Pinterest boards. Now I simply ask ChatGPT, “What should I put on this wall?” or generate a mock-up of furniture in my living room. This shift is more than convenient; it reflects digital disruption, in which “new technologies break down industry barriers, creating opportunity while destroying other business models” (Li, 2025). Interior designers, for example, now face competition from these Gen AI-powered tools, which offer instant, low-cost alternatives.
What I find most powerful is the hyper-personalization of Generative AI. With a single prompt, I can generate an invitation that perfectly fits the event I am hosting, or adapt a recipe by asking, “Make this vegan.” These are very specific, niche requests. They remind me of the Long Tail theory, in which “a significant portion of Amazon sales come from obscure books that are not available in bookstores” (Li, n.d.). AI opens up a similar long tail of creativity: each individual output has low demand on its own, since not everyone wants an invitation personalized with my details, but all of these niche demands summed together represent enormous value.
However, not every outcome is perfect. I occasionally get bizarre results when I use AI to style my house: floating furniture, extra plants that never existed, or strangely distorted objects. These moments highlight the challenges Gen AI faces: biases, black-box algorithms, unreliability, misinformation, and privacy concerns. AI is meant to augment human judgment, yet it can also carry conscious and unconscious biases. Algorithms can be unreliable and produce flawed or even unsafe outcomes. In many models it is practically impossible to determine how an input led to a certain output, since they function as “black boxes”. Improved techniques and technologies make it easier to produce convincing fake information. Lastly, personalization increases content relevance, but it also raises privacy concerns (Media support center, 2020).
These challenges are worrying, and this is where human judgment remains essential. Generative AI can suggest, but I have to curate: filter out what is unrealistic or irrelevant, and then decide which ideas truly add value to my life (Agrawal et al., 2022).
Bibliography
Agrawal, A., Gans, J., & Goldfarb, A. (2022). From prediction to transformation. Harvard Business Review, 100(11-12).
Li, T. (2025). Information Strategy Session 2 Theory [Presentation slides]. Rotterdam School of Management, Rotterdam, Zuid-Holland, the Netherlands. Canvas. Retrieved 11 September 2025, from https://canvas.eur.nl/courses/53279/modules
Media support center. (2020, October 1). Challenges and opportunities of AI [Video]. YouTube. https://www.youtube.com/watch?v=BYqzaeeRL-8