If there is anything Donald Trump is right about, it is the ubiquity of "fake news" in the media. With the rise of shareable online media content, social media platforms like Facebook and Twitter have become the breeding ground for "fake news," or "false news stories, often of a sensational nature, created to be widely shared online for the purpose of generating ad revenue via web traffic or discrediting a public figure, political movement, company, etc" (Steinmetz, 2017).
After the tragic shootings in Las Vegas earlier this month, Facebook and Twitter were two online media companies that faced criticism when fake articles claiming the shooter was a "left-leaning liberal" or aligned with ISIS spread across their platforms. With content carrying these false claims, and with users able to boost that content through Facebook and Twitter Ads, the potential effect on society is dangerous. We've seen how one off-hand tweet from the right person can make the stock market react; what happens to society when something so polarizing is spread and believed? And why are we so quick to believe it?
Recent research from Yale University suggests that the illusory truth effect is at work in these situations. The illusory truth effect, first documented in the 1970s, describes how hearing the same claim a few times makes the brain faster at processing it, and people often misattribute that ease of processing as a sign that what they are seeing is true. Combined with "fake news," this could have very dangerous implications. For example, there are currently numerous investigations into whether ad buys promoting "fake news," designed to sway elections, were effective in amplifying the divisive messages that came into play in those elections.
Now I know what you're thinking: "Sure, I can see how the blue-collar working class living somewhere in the south of the US can believe the Breitbarts of this world, but I'm university educated. I can tell the difference." Can you really, though? And is it always so obvious? The Economist recently came out with a fake news quiz: https://www.economist.com/blogs/graphicdetail/2017/04/daily-chart
See how many you can spot.
So, what can be done to fight fake news if our brains are predisposed to believe things we see multiple times? Much debate has surrounded the role online media players like Facebook and Twitter should play in this debacle. They have faced considerable political pressure from governments to do more to filter out fake news, while others question whether these platforms should be regulated under the right to free speech in certain countries. Regardless of your thoughts on whether it is these companies' responsibility to fight this war against fake news, here are some readings highlighting how you can fight fake news with your own news-reading:
https://www.nytimes.com/2017/09/18/business/media/fight-fake-news.html
http://uk.businessinsider.com/facebook-how-to-spot-fake-news-2017-4?r=US&IR=T
References
http://time.com/4959488/donald-trump-fake-news-meaning/
https://www.wired.com/story/should-facebook-and-twitter-be-regulated-under-the-first-amendment/
https://www.ft.com/content/030184c2-a7f1-11e7-ab55-27219df83c97
Hey Kim!
Two quick points to add/ask.
Do you think that fake news spreads faster than real news? Fake news, besides being intentionally misleading, is often designed to be more shocking. This can lead to more people sharing it and increases the chance of a given story going viral. Given that fake news articles have more room to come up with clickable stories (as the truth is not a limiting factor), do you think this makes them easier to share compared to news that is constrained by the truth? And is online media more susceptible to this?
Do you think that the way information is filtered for us makes it more likely that we see fake news? We all know that Facebook and Google collect information about our online behaviour and feed us information we already agree with (reinforcing our confirmation bias). Do you think that, with the current algorithms, fake news is more likely to be recommended to us, especially given that fake news is usually created to push a certain political agenda?
Also, I got 10/12 – I’M NO FOOL! The chicken crossing the road and the kitten perfume got me 😛
Hey Loïc!
Nice job! And good questions. Let me try my best to answer them.
I think fake news has the potential to spread faster than real news because, yes, like you say, it is created to be shocking. I think (or hope) that there's a human element here, where people will start to be more skeptical of what they read now that they know fake news is a big problem. With all of these investigations into fake news affecting elections, and all of these cases of fake news stories, I hope it gives people pause to consider the veracity of what they're taking in before they share, like, etc.
I think it's also important to consider that a lot of these "news stories" are backed by online advertising campaigns rather than spreading organically. These days online ads are harder to separate from real content. Native advertising, or advertising made to blend in with the platform on which it appears, is a very lucrative and effective format, and it's the perfect type of advertising for fake news because you sometimes find these ads on the same pages where you get your real news. So yes, I think online media is more susceptible to fake news, not just because people use these platforms and get their news there, but also because a lot of these companies (Google, Facebook, Twitter) are also advertising businesses. I think we are already seeing these algorithms at work, with fake news being recommended to people who are likely to "convert," that is, likely to buy into a certain political agenda. And I think we would be surprised by how much of an effect it has that we can't see or measure.
Hi Kim, first of all thank you for this well-written analysis of fake news.
I also agree that fake news presents a big threat to our society and is much more difficult to spot than one thinks. Over the last year, especially after the US election, the big social platforms have promised to develop better systems to stop these stories from spreading. However, while it might be achievable to stop fake news from benefiting from paid advertising campaigns, I believe it is inherently difficult to stop fake news that spreads organically, because doing so could easily interfere with individuals posting 'normal' content. Throughout your research, did you find out about any ways the big social networks are planning to filter for fake news?
I would also like to add that fake news is not only politically dangerous but can also impact financial markets. Last week, for instance, erroneous headlines went out on a Dow Jones newswire stating that Google wanted to buy Apple. Trading bots reacted to this news and, for a brief moment, Apple stock went up by more than $2!
https://techcrunch.com/2017/10/10/dow-jones-said-that-google-was-buying-apple-the-algos-bought-it/?ncid=rss&utm_source=feedburner&utm_medium=feed&utm_campaign=sfgplus&sr_share=googleplus&%3Fncid=sfgplus
As such incidents become more frequent, the Pentagon has also stated that it is more worried than ever that fake news could lead to a financial market crash.
http://www.businessinsider.com/pentagon-stock-market-crash-darpa-2017-10?IR=T
In order to prevent fake news from spreading, do you think much more severe punishments should be imposed?
Hey Laurin,
Great points and good questions! Regarding how the big social networks are planning to filter for fake news: from a corporate perspective, is it in their best interest to filter fake news completely? These are paying customers buying ads that convert (which means the networks earn more revenue per "click" or interaction). I think to some extent they have been slow to take this threat seriously because it brings them revenue. That said, I know they have been working on better filtering in their systems and hiring more people to screen their ad approvals. But in the end, I'm not sure there is one simple solution at the moment, or enough of an incentive for these companies to fix the problem.
On top of that, there is the question of "freedom of speech": where should these social networks draw the line between allowing someone their right to free speech and keeping them from causing massive chaos with their words online? You're right about punishments: perhaps more severe ones could deter this kind of advertising. But what kind of punishments can be imposed? Shutting down an account? You can easily buy a "farmed" advertising account on these networks for a few grand and continue doing the same thing on a different account. Maybe I am pessimistic, but I think talking about how destructive and widespread fake news is, and educating people on how to spot it, is the best counter to this kind of practice.
Here's an article about some of the steps they are taking to filter for fake news:
https://www.nytimes.com/2017/10/17/technology/google-fake-ads-fact-check.html
Hi Kim!
I have to say that fake news is going to be one of the greatest challenges of our generation. I think children need to be taught from a young age about freedom of the press, but also how to keep a critical eye on the abundance of information we have access to.
What I think creates such a challenge is that a large number of people, whatever their country of origin, tend to read the press that confirms what they already believe and the way they view society. For instance, in France, people who lean politically far left will read papers such as Libération or l'Humanité, while a right-wing person will perhaps read Le Figaro.
That makes me wonder: do some people really believe fake news, or do they want to believe it because it confirms their view of society?
Hey Romane,
Your example from France is a very good point! Fake news, or biased news, has been around since long before the internet and online advertising. I agree, and I think people are also more prone to believe these biases because they are in line with their view of society, which is another reason we are so susceptible to the effects of these practices. I also think this shows how important it is for us to be better informed and more critical about what we read, because in the end this kind of bias has been around for ages (albeit not quite as outright false as what we are seeing now). Perhaps we can't just rely on Facebook and Google to have better filters.