As you’re reading this blog, you’ve probably heard plenty about the emergence of data-driven marketing: a development many companies pride themselves on, as it enables them to meet customer demand as precisely as possible. But it’s not just businesses that use big data and other innovations to influence people’s behaviour. Although it might be less visible – and for good reason – data-driven strategies combined with other technological innovations have come to dominate the field of electoral marketing in politics.
While electoral marketing has always been data-driven to some extent – primarily relying on demographics and geography – big data analytics has opened the door to something far more intrusive: psychographics. Where once large surveys were needed to determine and categorize our personalities, we now produce this data, consciously and unconsciously, around the clock.
Last year’s political highlights – Brexit and Trump – both put this to use. Big data analysis enabled these campaigns to show the right message to the right individual in a uniquely personalized fashion. The CEO of the company hired by both campaigns to perform these analytics even claims that each and every peculiar tweet by Trump was in fact data-driven.
In addition, AIs are becoming increasingly fast and capable at creating realistic content. These AIs, just like big data analytics, were used in the Brexit campaign, in which swarms of bots were tasked with creating and distributing misinformation and fake news in order to manipulate public opinion.
As you can imagine, the implications of just these two technologies can be massive. The degree to which they have already been influential is, however, unknown and therefore debatable.
The quantity and one-sidedness of news stories (real or fake) presented to potential voters have already increased through the use of social media. The ability to further personalize this communication, through either real or fake news, forms a serious potential threat to a ‘clean’ democracy: it prevents potential voters from hearing any news other than what most appeals to them. This inherently prevents them from making up their own minds about whom to vote for, and shifts that decision to the analyst: whom should a specific individual vote for?
Part of the public is becoming increasingly familiar with the (business) applications of AI, partly due to its commercialization by Apple and the likes of Elon Musk. For the majority, however, AI is still nothing more than an interesting topic for a science-fiction movie. And as we all know, the general rule of a democracy is that the majority wins.
To mitigate these risks, and for the public to become aware of its biases, I plead for a society that is better educated about these technologies and their implications for our daily lives.
Should you be interested in how your personality traits (the OCEAN model) can be derived from your Facebook account, you can participate in the original Cambridge research at https://applymagicsauce.com/
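To give an impression of the mechanics behind such psychographic profiling, here is a minimal, hypothetical sketch: it fits a simple linear model that predicts one OCEAN trait (openness) from a binary vector of page likes. All data below is synthetic and the page weights are invented for illustration; the actual Cambridge research used millions of real users and more sophisticated models.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: each row is a user, each column a page (1 = liked it)
n_users, n_pages = 200, 50
likes = rng.integers(0, 2, size=(n_users, n_pages))

# Hypothetical ground truth: each page nudges "openness" up or down a bit
true_weights = rng.normal(0, 1, n_pages)
openness = likes @ true_weights + rng.normal(0, 0.5, n_users)  # noisy trait

# Fit per-page weights with ordinary least squares
weights, *_ = np.linalg.lstsq(likes, openness, rcond=None)

# Predict the trait for every user from their likes alone
predicted = likes @ weights
correlation = np.corrcoef(openness, predicted)[0, 1]
print(f"correlation between true and predicted openness: {correlation:.2f}")
```

Even this toy model recovers the trait well, which hints at why a rich stream of behavioural data can substitute for the large personality surveys mentioned above.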
Sources:
http://www.independent.co.uk/news/long_reads/artificial-intelligence-democracy-elections-trump-brexit-clinton-a7883911.html
https://www.economist.com/news/science-and-technology/21724370-generating-convincing-audio-and-video-fake-events-fake-news-you-aint-seen?zid=291&ah=906e69ad01d2ee51960100b7fa502595
https://motherboard.vice.com/en_us/article/mg9vvn/how-our-likes-helped-trump-win
The CEO of Cambridge Analytica on the use of big data and psychographics:
Interesting post, Bram. It is definitely troubling to see mass manipulation through popular social channels such as Facebook, especially when it happens through bot accounts. I think governments need to put more pressure on Facebook to limit fake accounts, for example by making it more difficult to log in, or by requiring an account to have been active for at least X period of time before it can make more than X comments in a public group. Implementing new policies may also be a difficult challenge for Facebook, as hackers and well-funded fake-news organizations can usually find ways around them.
Thank you for your post on this interesting topic, Bram.
In the end you plead for better public education on these technologies. I agree with you that AI needs to be more than a buzzword for the majority of people, but I am unsure whether this goes far enough.
Ordinary people scrolling through their Facebook News Feed can be taught that not everything they see is the truth, but will they fact-check each individual statement by a politician? Most of them probably will not. And even those who do will, the instant they get the message from a post, remember a tiny bit of it. As you pointed out, there may have been several bots at work, so these people will see this and similar posts not just once but many times over, and every time the message gets reinforced.
To prevent this, we need not just the people but also the platforms, the governments, and even the political parties to pay attention to this issue and make an effort to counter it.
The platforms can do so through enhanced monitoring of their content, and governments by establishing rules that hold the platforms accountable for what is on their websites. As for politicians, I hope that instead of shaping their statements into phrases that poll best – as in the last elections on House of Cards (season 3; given the cover of this post, I assume you remember the scenes with Claire’s hairstyle and the polling on it) – they stand up for their principles and try to win people over with integrity, even though that might sometimes seem very idealistic.
Dear Bram,
Echo chambers of many kinds existed long before Facebook, AI and big data. It is a human tendency to favour information that confirms our beliefs and hypotheses. Confirmation bias will always be present, and AI or big data can’t do anything about it. In fact, even AI can be biased: someone wrote a post here about how AI can be racist, and I encourage you to read it.
With regard to regulating which news gets selected, I think that is a controversial measure that will most likely backfire.
First, where should we draw the line between fake and non-fake news?
Even well-respected sources of information such as The New York Times and the Associated Press have to make corrections from time to time. The last one I can remember concerned the number of agencies that backed up the so-called “Russian collusion”: the figure was corrected from 17 intelligence agencies to just 4. Should The New York Times be blacklisted because one of its articles was not accurate? I don’t think so.
What about anonymous sources? How can you track them? How can you be sure whether they are genuine or not? Should stories based on them be regarded as fake news? And what about articles that report not facts but opinions, such as economic or political commentary? The people assessing what’s fake and what’s not can also have their own biases.
You can’t filter what’s “clean” from what’s not, since it is very hard to draw that line. Censorship will reinforce the biases of those who are censored, or who support the censored sources, by making them believe they are being silenced and oppressed – that the information was censored precisely because it damages the opposition.
I’m not saying I support the rise of fake news; I’m just being pessimistic. I believe the only way to deal with misinformation is to educate people to be cautious with the information they read, and to teach them to properly check the sources and legitimacy of the news.