Like a large majority of my age group, I have incorporated Generative AI tools into my weekly digital activities, most of which are academic-related. When I am studying readings, AI can organise the input I provide into text of any format. For long passages, having the information organised into bullet points lets me interact with the text instead of only working to understand it. For example, I have noticed that when I get AI to reconstruct a text in a way that is more digestible for me, I am more inclined to engage with it, such as asking questions out of pure interest in the content to deepen my understanding.
I have also used AI as a search engine for obtaining pieces of information that I would otherwise have to browse multiple websites for. An example of this is using generative AI to find coupon codes before I make online purchases. Publicly available discount codes are generally scattered across many websites, and asking AI whether any usable codes exist before making a purchase has led to a few discounts. Though these searches are not always successful, the minimal effort involved makes them worthwhile.
As a combination of search optimisation and sales driven by AI, I have also asked AI to recommend activities to do with x people in x location. AI can collect general details like your budget and a broad idea of what you want the activity to be like, then provide multiple different activities and links. This raises the question of whether marketing schemes could be integrated into AI to nudge people towards purchases, instead of providing an "objective" answer. If this is, or becomes, possible, AI could manipulate users and abuse the trust they place in it. For example, when explaining a concept to a student, it could recommend (in effect, advertise) an online tutor who has paid for that placement. When AI gives place recommendations, restaurants could pay to promote themselves, resulting in skewed recommendations. The problem is that the output would be biased and possibly less reliable. It is therefore important that legislative bodies and AI developers acknowledge the risk of manipulating over-trusting users without their knowledge if AI starts to incorporate marketing in order to make more profit.