The Future of Marketing in an AI Moderated Digital World

16 October 2023


Companies can currently apply online marketing in many ways. Ads are placed all around us. Websites are constructed in the ways humans respond to best, as data reveals how consumers behave and how to improve metrics such as conversion and click-through rates (Fogden, 2023). Search engines form something of a marketplace for webpages, where online auctions determine what will be shown. Marketeers use search-engine optimization (SEO) and paid search-engine advertising (SEA) to win those auctions and feature among the first results (Eology, n.d.). This is how it has been. Now, one development shows the potential to change it all. Since February 2023, Bing has been powered by OpenAI’s GPT-4, an advanced AI that Bing uses to improve its search engine and to act as a copilot and chatbot (Mehdi, 2023). This approach could forever change the way we use search engines and how results are generated. It could even change the way we interact with the internet altogether.

Right now, marketeers target us directly. Their methods are based on getting information directly to their target audience in the most appealing way. The intervention of AI may change this. Granted, an AI like GPT-4 is trained on pre-existing datasets and does not have direct internet access, so its responses cannot be influenced so easily. I asked Bing AI, which does connect to Bing’s search engine, whether its responses are influenced by paid search advertising, and it categorically rejected the possibility. According to its own account, Bing AI and even the search results it draws on aim to use only the most relevant and reliable sources, and any ads and sponsored links are filtered out by its internal tools. Perhaps advertising does not influence it, but search engine optimization can still help websites appear more relevant and rank higher in the results. In this way, internet marketing already focuses on convincing the search algorithm of a website’s value, not the user directly.

Regardless, we might reach a point where we rely on AI assistants for all our information, or where search engines are run entirely by an AI that browses the internet, filters through information, and presents us with the best results. AI certainly has a promising future in real-time content moderation (Darbinyan, 2022), and, according to Santiago (2023), marketeers can even use it to protect their brand. But when marketeers themselves need to get information past an AI gatekeeper, how will they respond? Many current strategies can make content appeal to humans, but what will the AI respond to? If AI is the middleman, marketing efforts might have to be constructed so that the AI filters and retells information in the way marketeers ultimately want it to reach consumers. It becomes important to consider what the AI will respond to, and what it will need in order to see the information’s value.

Perhaps AI will usher in the end of traditional online marketing to consumers. Perhaps AI will simply assess an offer at its true value and recommend it only if it fits the consumer’s needs, be it a new chair, information, or entertainment. Think of recommendations on social media platforms such as TikTok, whose algorithms carefully select content that a user will probably like. This might be a preview of how we will receive all information: moderated by AI. And it could be a win-win situation. Marketeers could rely on AI to do the work of targeting, personalizing, and distributing content to the right audience, while users can rest easy knowing the information they receive is highly relevant. AI could moderate content better than either marketeers or consumers themselves ever could.

References

Darbinyan, R. (2022, June 14). The growing role of AI in content moderation. Forbes. https://www.forbes.com/sites/forbestechcouncil/2022/06/14/the-growing-role-of-ai-in-content-moderation/.

Eology. (n.d.). SEA know-how: How to use search engine advertising unerringly! Eology Magazine. Retrieved October 16, 2023, from https://www.eology.net/magazine/sea-know-how#jump_function.

Fogden, T. (2023, April 14). What makes a good website? 12 must-haves. Tech.co. https://tech.co/website-builders/what-makes-good-website.

Mehdi, Y. (2023, March 14). Confirmed: the new Bing runs on OpenAI’s GPT-4. Microsoft Bing Blogs. https://blogs.bing.com/search/march_2023/Confirmed-the-new-Bing-runs-on-OpenAI%E2%80%99s-GPT-4.

Santiago, E. (2023, April 7). AI content moderation: How AI can moderate content + protect your brand. HubSpot. https://blog.hubspot.com/marketing/ai-content-moderation.


ChatGPT can write assignments just as well as students can, and that’s the problem

1 October 2023


Until a few weeks ago I had never personally used generative AI, but having tried it now I am equally excited and disappointed. ChatGPT is like the most intelligent first-year student in the class: full of raw potential, but it doesn’t really know anything.

There are many reports of ChatGPT passing academic tests. Terwiesch (2023) indicates that ChatGPT usually produces 50 to 70 percent correct answers, and he had previously found that it could pass an exam from the MBA program at the Wharton School, where he is a professor. From my limited experience it is obvious that ChatGPT holds a lot of information, but it seems to me that it doesn’t really know anything. Ask it any question and it will give an information-packed answer that seems about right, just like any proper university student can. However, the fact that ChatGPT or a student can argue any point does not make it correct. Tell ChatGPT that it is wrong, or challenge it, and it will smoothly adjust its story to fit your criticism or additions. ChatGPT will not readily make claims that are factually inaccurate, but it certainly makes claims that can be (academically) spurious. As part of my AI experiment I decided to double-check my answer to a homework assignment question by posing the question to ChatGPT. Its answer was plainly wrong.

To someone new to these concepts, that first answer would seem quite valid. Only after I questioned ChatGPT did it correctly identify the moral hazard problem, with all the required argumentation to validate its renewed claim. ChatGPT is a spineless pushover, and by correcting itself so readily it reveals itself to be unreliable.

Another example also illustrates the variance in its answers. Hoping to use ChatGPT as a study partner, I asked it to test my knowledge of transaction cost theory. It actually asked an excellent question (to name three determinants), but then applauded my answer even though I had deliberately included one wrong determinant. When I posed the same question back in a new chat, ChatGPT luckily did not mention my false determinant. And yet, it had originally allowed me to think my answer was right when it was wrong.

While testing ChatGPT’s ability to pass an MBA exam, Terwiesch (2023) found the same thing, though he applauds ChatGPT’s ability to correct itself after receiving human hints. He does call ChatGPT’s answer quality erratic and describes numerous mistakes that ChatGPT makes with the utmost confidence (Terwiesch, 2023). For this reason, Terwiesch (2023) recommends using AI for creative purposes where unpredictability is useful, but not for any serious assignment where accuracy is important.

While I used GPT-3.5, OpenAI has since released GPT-4, which is supposed to produce more accurate and useful results (OpenAI, n.d.). Nevertheless, according to Murgia (2023), GPT-4 suffers from the same limitations: a limited comprehension of context, an inability to learn from experience, and ‘hallucinations’ that limit its reliability.

My conclusion is simple. ChatGPT can indeed write assignments as well as students can. But students can be wrong, despite being able to make wonderful arguments for any point, and ChatGPT has the same problem. It can be very helpful in generating ideas and sparking thought, but it cannot be relied upon. Whenever the requested answer is neither a simple fact nor completely open to argumentation, ChatGPT fails. I find it useful, but I trust it less than my fellow students; at least they know how certain they are of their answers. My suggestion is to use ChatGPT as a support, but to rely on your own knowledge and research for anything that actually matters, such as your education.

References

Murgia, M. (2023, March 14). ChatGPT maker OpenAI unveils new model GPT-4. Financial Times. https://www.ft.com/content/8bed5cd7-9d1e-4653-8673-f28bb8176385.

OpenAI. (n.d.). GPT-4. OpenAI. Retrieved September 30, 2023, from https://openai.com/gpt-4.

Terwiesch, C. (2023, March 12). Let’s cast a critical eye over business ideas from ChatGPT. Financial Times. https://www.ft.com/content/591ad272-6419-4f2c-9935-caff1d670f08.

Terwiesch, C. (2023). Would Chat GPT3 Get a Wharton MBA? A Prediction Based on Its Performance in the Operations Management Course [White paper]. Mack Institute for Innovation Management at the Wharton School, University of Pennsylvania. https://mackinstitute.wharton.upenn.edu/wp-content/uploads/2023/01/Would-ChatGPT-get-a-Wharton-MBA.pdf.
