“Fake news”. The term has become an inescapable part of our lives. False information on social media is so common that we often perceive it as truth; in psychology this phenomenon is known as the illusory truth effect. Most people are aware of fake news, but few understand how it shapes their daily decisions, actions, and lives.
About 52% of online readers in the United States report that they regularly come across untrue information on social media and other online platforms, while 34% encounter fake news occasionally (Statista, 2018). Fake-news propaganda is an epidemic with no age, race, sex, or religious preferences. However, specific groups can be targeted by individuals, organizations, commercial ventures, or even political parties pursuing their own goals, such as popularity, revenue, or the outcome of presidential elections.
Fake news also drives engagement faster and spreads more widely, because false information is often more appealing to users than the truth. According to a study based on 126,000 stories shared on Twitter, false reports are 70% more likely to be retweeted than true ones (Pressner Kreuser, 2018). So how can we stop disinformation and avoid becoming someone’s target?
Given these circumstances, there are three main participants in this process: authors, readers, and the platforms on which fake news spreads. We cannot stop anyone from writing fake news. Readers have also failed to prevent the spread and harmful impact of false information; in fact, they make the problem worse by sharing it. However, when people fail, machines can help. Machine learning and artificial intelligence tools can assist platforms in combating untruth and stopping the menace of fake news. This can be done with data-driven systems that analyze the information, facts, keywords, and even punctuation in clickbait articles or digital newspapers and compare these data against authentic sources.
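To make the content-analysis idea concrete, here is a toy sketch of how surface signals such as sensational keywords and punctuation might be scored. This is only an illustration, not a real detector: the keyword list and weights below are invented, and real systems would use trained models over far richer features.

```python
import re

# Toy illustration (NOT a production detector): score a headline by simple
# surface signals often associated with clickbait: sensational keywords,
# exclamation marks, and ALL-CAPS words. The keyword list is invented here.
SENSATIONAL = {"shocking", "unbelievable", "secret", "miracle", "exposed"}

def clickbait_score(text: str) -> float:
    words = re.findall(r"[A-Za-z']+", text)
    if not words:
        return 0.0
    keyword_hits = sum(w.lower() in SENSATIONAL for w in words)
    caps_words = sum(w.isupper() and len(w) > 2 for w in words)
    exclamations = text.count("!")
    # Weighted sum normalized by headline length; weights are arbitrary.
    return (2 * keyword_hits + caps_words + exclamations) / len(words)

print(clickbait_score("SHOCKING secret exposed!!!"))   # high score
print(clickbait_score("City council approves budget")) # low score
```

A real platform would replace this hand-tuned score with a classifier trained on labeled articles and would cross-check claims against authentic sources, as described above.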
One AI-based algorithm that captured wide attention was created by Michael Bronstein, a professor at Imperial College London and founder of the fake-news detection company Fabula AI. His approach is based on how quickly disinformation spreads through social networks rather than on the content of the news itself. This lets the system flag falsehoods much faster and interrupt their distribution at an early stage, whereas a content-based approach acts only after the fake news has already spread. Fabula AI reports a detection rate of 93% (Lomas, 2019), which is why Twitter acquired it in 2019 for $22.8 million (Price, 2019).
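The underlying idea, that spread dynamics rather than content carry the signal, can be illustrated with a toy example that extracts simple propagation features from share timestamps. Note that this is a simplified assumption for illustration only; Fabula AI's actual system applies geometric deep learning to full propagation graphs, not the hand-picked features shown here.

```python
from statistics import mean

# Toy illustration of propagation-based features (NOT Fabula AI's model).
# Input: share timestamps for one story, in seconds after posting.
def spread_features(timestamps: list[float]) -> dict:
    ts = sorted(timestamps)
    gaps = [b - a for a, b in zip(ts, ts[1:])]
    return {
        "shares": len(ts),
        # Average time between consecutive shares: small gaps = fast spread.
        "mean_gap_s": mean(gaps) if gaps else float("inf"),
        # Fraction of all shares that happened within the first hour.
        "first_hour_share": sum(t <= 3600 for t in ts) / len(ts),
    }

viral = spread_features([5, 30, 60, 90, 200, 300, 600])
slow = spread_features([3600, 7200, 14400, 86400])
print(viral["mean_gap_s"] < slow["mean_gap_s"])  # True: faster spread
```

Features like these could then feed a classifier that flags suspiciously fast, burst-like cascades early, before a story reaches most of its audience.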
I believe machine learning and AI can help solve the problem of fake news. However, people should not stand in the way by engaging with or sharing false information publicly. Instead, we should report or block such content to reduce the threat of being manipulated or controlled by someone.
References:
Lomas, N. (2019). Fabula AI using social spread to spot “fake news”. https://tcrn.ch/3d1ODmi
Pressner Kreuser, A. (2018). Fake News Spreads Faster and 100 Times Further Than the Truth, According to Science. https://www.inc.com/amanda-pressner-kreuser/whos-really-responsible-for-spread-of-fake-news-answer-might-shock-you.html
Price, R. (2019). Twitter acquired a startup for $22.8 million last quarter, and it’s probably this London AI company. https://bit.ly/3jAMEbh
Statista. (2018). https://www.statista.com/statistics/649234/fake-news-exposure-usa/