Social media: misinformation or censorship?

9 October 2020

Social media allows news to spread faster than ever and has changed the way people write and read about current events. A major dilemma for social media platforms is how to prevent misinformation. Not only does moderation cost a lot of money, it is also difficult to find and remove such content in the first place. Moreover, it raises free-speech concerns, because it is often hard to establish whether something is actually false. Facebook, for example, has struggled with this issue, especially around elections. Because advertisers on Facebook can target a very narrow audience, spreading misinformation there is even easier. Yet Facebook does not restrict most of these posts: users can report them, but each report still has to be reviewed before anything is removed. Twitter, meanwhile, is experimenting with prompting users to add their own text before retweeting, to see whether this slows the further spread of content from unknown sources.
The question that arises is how responsible social media platforms are for this problem. Should they be required to invest the time and money to address it, even though doing so cuts into their advertising revenue and user activity whenever posts are deleted, or should governments finance and review these posts instead? At present, platforms comply with demands to curb misinformation mainly to avoid being shut down or regulated by governments, and the result is a principal-agent problem: governments (the principal) delegate moderation to the platforms (the agent), whose incentive to maximise engagement and ad revenue does not align with the goal of curbing misinformation.
One possible solution that would both limit fake news and avoid censorship is to introduce ratings for posts and comments. These ratings could come from professionals, from other users, or from algorithms, and posts would remain visible to everyone, labelled rather than removed. Of course, one problem is that people may rate posts according to their own opinions rather than their accuracy, but this seems like the lesser evil: the same bias already drives today's reporting systems, where it results in posts being deleted outright. A minimal sketch of how such a rating layer could work is given below.
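As an illustration only, here is a minimal sketch in Python of how a post's credibility score could combine professional reviews, user votes, and an algorithmic estimate without hiding the post itself. All names, weights, and scales are hypothetical assumptions, not a description of any platform's actual system.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class PostRatings:
    """Hypothetical container for the credibility signals attached to one post."""
    professional_scores: List[float] = field(default_factory=list)  # 0.0 (false) .. 1.0 (accurate)
    user_scores: List[float] = field(default_factory=list)
    algorithm_score: Optional[float] = None

def credibility_score(r: PostRatings,
                      w_professional: float = 0.5,
                      w_users: float = 0.3,
                      w_algorithm: float = 0.2) -> Optional[float]:
    """Weighted average of whichever signals are available (weights are illustrative).

    Missing sources simply drop out, so a post with only user votes is still
    scored and labelled rather than removed."""
    parts = []
    if r.professional_scores:
        parts.append((w_professional, sum(r.professional_scores) / len(r.professional_scores)))
    if r.user_scores:
        parts.append((w_users, sum(r.user_scores) / len(r.user_scores)))
    if r.algorithm_score is not None:
        parts.append((w_algorithm, r.algorithm_score))
    if not parts:
        return None  # no ratings yet: show the post unlabelled instead of deleting it
    total_weight = sum(w for w, _ in parts)
    return sum(w * s for w, s in parts) / total_weight

# Example: a post with mixed signals gets a low-credibility label but stays visible.
ratings = PostRatings(professional_scores=[0.2],
                      user_scores=[0.9, 0.4, 0.3],
                      algorithm_score=0.35)
print(f"credibility: {credibility_score(ratings):.2f}")  # 0.33
```

The point of the sketch is the design choice rather than the exact formula: every post keeps a visible score, so readers see the full feed while still getting a signal about its likely accuracy.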