“Fake news.” The term has become an inalienable part of our lives. False information on social media has become so common that we often perceive it as truth, a phenomenon psychologists call the illusory truth effect. Most people recognise the term, but few understand how fake news shapes their daily decisions, actions, and lives.
About 52% of online readers in the United States report regularly encountering untrue information on social media and other online platforms, while 34% report coming across fake news occasionally (Statista, 2018). Fake news propaganda is an epidemic with no age, race, sex, or religious preferences. However, specific groups of people can be targeted by individuals, organizations, commercial ventures, or even political parties pursuing their own goals, such as popularity, revenue, or influence over presidential elections.
Obviously, fake news spreads faster and reaches further than genuine reporting, because fabricated information is often more appealing to users than the truth. According to a study of 126,000 stories shared on Twitter, false reports are 70% more likely to be retweeted than true ones (Pressner Kreuser, 2018). So how can we stop disinformation and avoid becoming someone’s target?
Considering the established circumstances, there are three main participants involved in this process: authors, readers, and the platforms on which fake news spreads. We cannot prevent anyone from writing fake news, and readers have so far failed to stop the spread and harmful impact of false information; in fact, readers make the problem worse by sharing it. However, in the modern world, where people fail, machines can help. Machine learning and artificial intelligence tools can assist platforms in combating falsehoods and stopping the menace of fake news. This can be done by introducing data-driven systems that analyse the information, facts, keywords, and even punctuation in clickbait articles or digital newspapers and compare that data to authentic sources.
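To make the content-based idea concrete, here is a deliberately simple toy sketch in Python. It is not a real detection system: the keyword list, the weights, and the scoring formula are all illustrative assumptions, and a genuine data-driven system would additionally verify claims against authentic sources. It only shows how surface features such as sensational keywords, exclamation marks, and ALL-CAPS words can be turned into a numeric signal.

```python
# Toy clickbait scorer: counts sensational signals in a headline.
# The keyword set and the weights below are arbitrary illustrations,
# not values from any published fake news detector.

SENSATIONAL = {"shocking", "unbelievable", "secret", "exposed", "miracle"}

def clickbait_score(headline: str) -> float:
    """Return a crude sensationalism score, normalised by headline length."""
    words = headline.split()
    if not words:
        return 0.0
    keyword_hits = sum(1 for w in words if w.strip("!?.,").lower() in SENSATIONAL)
    exclamations = headline.count("!")
    caps_words = sum(1 for w in words if len(w) > 2 and w.isupper())
    # Weighted sum of the three signals, scaled by headline length.
    return (2 * keyword_hits + exclamations + caps_words) / len(words)

print(clickbait_score("SHOCKING secret EXPOSED!!!"))        # high score
print(clickbait_score("City council approves new budget"))  # low score
```

A real classifier would learn such weights from labelled data rather than hard-coding them, but the feature-extraction step works on the same principle.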
One AI-based algorithm that has captured widespread attention was created by Michael Bronstein, professor at Imperial College London and founder of the fake news detection company Fabula AI. His approach is based on how rapidly disinformation disseminates through social networks rather than on the content of the news itself. This enables the system to flag fraud much faster and interrupt its distribution at an early stage, whereas a content-based, data-driven approach only acts after the fake news has already spread. Fabula AI has shown impressive results, with a fraud detection rate of 93% (Lomas, 2019), which is why Twitter acquired it in 2019 for $22.8 million (Price, 2019).
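The propagation-based idea can also be sketched in a few lines of Python. To be clear, this is a hypothetical illustration, not Fabula AI’s actual model: the features (`shares_first_hour`, `depth`, `verified_sharers`) and the thresholds are invented for the example. The point is that the rule looks only at how a story spreads, never at its text.

```python
# Toy propagation-based check: classify a sharing cascade, not the content.
# All feature names and thresholds are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class Cascade:
    shares_first_hour: int   # how fast the story spreads initially
    depth: int               # length of the longest reshare chain
    verified_sharers: int    # verified accounts participating

def looks_suspicious(c: Cascade) -> bool:
    # Research suggests false stories spread faster and in deeper chains,
    # often with few authoritative accounts involved.
    return (c.shares_first_hour > 500
            and c.depth > 10
            and c.verified_sharers < 2)

print(looks_suspicious(Cascade(1200, 15, 0)))  # True: fast, deep, unverified
print(looks_suspicious(Cascade(80, 3, 5)))     # False: slow, shallow spread
```

Because such features are available within the first hours of a story’s life, this style of system can intervene before the content-analysis approach would even have enough data.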
I believe that machine learning and AI can help solve the issue of fake news. However, people should not stand in the way by engaging with or sharing false information publicly. Instead, we should report or block such content to reduce the threat of being manipulated or controlled by someone else.
References:
Statista (2018). Fake news exposure in the United States. https://www.statista.com/statistics/649234/fake-news-exposure-usa/
Pressner Kreuser, A. (2018). Fake News Spreads Faster and 100 Times Further Than the Truth, According to Science. https://www.inc.com/amanda-pressner-kreuser/whos-really-responsible-for-spread-of-fake-news-answer-might-shock-you.html
Lomas, N. (2019). Fabula AI using social spread to spot “fake news”. https://tcrn.ch/3d1ODmi
Price, R. (2019). Twitter acquired a startup for $22.8 million last quarter, and it’s probably this London AI company. https://bit.ly/3jAMEbh
While AI has, on the one hand, made real progress in recognizing fake news, it has, on the other hand, enabled the construction of ever more realistic fake news. Machine learning and AI even make possible manipulated videos that portray a person saying things he or she never said in real life. Moreover, this ‘deepfake’ technology can disguise fabricated reporting, allowing the organizations that produce fake news to avoid getting caught. This whole process is blurring the line between fiction and reality more than we have ever seen before.
Personally, I think that part of the problem is that social media users are not aware of how low the bar is to create fake news, especially as the technology improves every day. We are warned about fake news, and most of us realise that it is a real problem, yet we consider ourselves rational enough to tell fiction from fact, while this may not actually be the case. Interestingly, users tend to spot fake news only when it goes strongly against their beliefs. This effect could, for example, lead to increasingly entrenched political standpoints over time, since we fail to spot fake news that fits our picture of the world. Furthermore, it harms our trust in the media, making it hard for users to know where they can actually find accurate information.
AI is making the detection of fake news easier, but it is also making its creation ever easier and more believable. I think that in the coming years we will be exposed to more fake news, and education on this subject is important.
Hi Javid. It is truly amazing to see how AI can help tackle the fake news issue. In addition to your blog, I would argue that media corporations are at the centre of the problem. Since online media is all about provoking the greatest number of reactions from its community, “hot” topics are crucial. As you illustrated, a fake report is 70% more likely to be retweeted than a true one, and shocking information, even if fake, tends to make the community react. Online media corporations are therefore eager to find such provocative topics, even when they might be fake. Governmental regulation could then play an essential role in curbing the power of these corporations and reining in this unethical business model.