Societal polarization due to Social Media in the USA – Who should take responsibility?

9 October 2020


Social media companies like Google and Facebook bear ever more responsibility in our society as they grow in size and influence. Even though their platforms and services were not designed specifically to manipulate or steer public opinion, they are increasingly confronted with the reality that they do. From seemingly minor issues, such as political campaign emails being marked as spam in a prospective voter's Gmail account (Newton, 2020), to concerns that algorithms on Facebook or YouTube, with the help of content moderators, are unfairly removing conservative content (Romm, 2020), these big tech companies are already under scrutiny in the United States from both major political parties. The irony of this criticism lies in the fact that these companies were left largely unregulated by the same government criticizing them today. It is due to this lack of regulation that big tech companies focused the development of their algorithms on the narrow goal of maximizing users' attention, since holding attention lets them show more ads and thus earn more advertising revenue. Combining this business incentive with the goal of strengthening network externalities explains how companies like Google and Facebook got into this situation. This unfettered pursuit of user attention has fuelled the proliferation of social media in society and has enabled a level of polarization that is unprecedented in the history of the USA (DellaPosta, 2020).

In an effort to reduce the pressure governments put on them, Google and Facebook have developed more comprehensive content moderation policies, working with policy makers and independent organizations. Facebook alone has committed to hiring 15,000 content moderators to enforce them (Thomas, 2020). Effectively, this has transformed both media giants into an independent online police force, with policies as its laws and content moderators as its officers. Even though these policies were developed with key stakeholders in government, it raises questions about how society functions, and should function, as governmental responsibilities become increasingly intertwined with big tech firms' operations. Although governments are responsible for enforcing rules around freedom of speech, in practice this is done more and more by tech companies. From a radical point of view, these practices are undemocratic, as big tech companies operate without the oversight of elected officials; nevertheless, it can be argued that these measures are necessary in the short term to allow policy makers to catch up and regulate the industry.

As social media platforms increasingly become the medium through which democratic societies express their opinions, they effectively become tools that can steer opinion. Because of this reality, I believe that governments should play a larger role in regulating these companies, creating rules with penalties as well as incentives to reduce the polarization social media causes. One possible approach is to create clear rules around content and advertising, similar to those that already apply to newspapers and network providers. These rules would also need to be enforced with financial penalties, such as social media companies having to pay back money they received for inappropriate content or advertising. The question ultimately arises: how long can the US government, and other governments around the world, allow social media companies to continue to self-regulate? The clock is ticking, and the answer will likely come not long after the 2020 US election.

References:

DellaPosta, D. (2020) ‘Pluralistic Collapse: The “Oil Spill” Model of Mass Opinion Polarization’, American Sociological Review, 85(3), pp. 507–536. doi: 10.1177/0003122420922989.

Newton, C. (2020) 'The tech antitrust hearing was good, actually', The Verge, 30 July. Available at: https://www.theverge.com/interface/2020/7/30/21346575/tech-antitrust-hearing-recap-bezos-zuckerberg-cook-pichai (Accessed: 9 October 2020).

Romm, T. (2020) 'Amazon, Apple, Facebook and Google grilled on Capitol Hill over their market power', The Washington Post, 30 July. Available at: https://www.washingtonpost.com/gdpr-consent/?next_url=https%3a%2f%2fwww.washingtonpost.com%2ftechnology%2f2020%2f07%2f29%2fapple-google-facebook-amazon-congress-hearing%2f (Accessed: 9 October 2020).

Thomas, Z. (2020) 'Facebook content moderators paid to work from home', BBC, 18 March. Available at: https://www.theverge.com/interface/2020/7/30/21346575/tech-antitrust-hearing-recap-bezos-zuckerberg-cook-pichai (Accessed: 9 October 2020).


Ethical considerations from future development and dependence on AI

8 October 2020


Continuous breakthroughs in AI technology allow us to tackle ever more complicated problems that were previously exclusively within the domain of human cognitive problem solving. As the technology advanced from the first AI programs of the 1950s, which could play amateur-level checkers, excitement about the possibilities of AI grew in parallel with the complexity of the tasks it could solve. One key component of solving complex problems effectively, however, which is intrinsic to human nature, is understanding the context of the surrounding world in which the problem is being solved. Although humans can make AI more intelligent, in the sense that it can complete ever more complicated tasks at scale, the outcomes become increasingly volatile as AI searches for the most effective answer without necessarily any regard for the natural world.

A recent example of this is the public outcry over the 'A-level' results, which were predicted by an algorithm for the first time this year. Normally students would sit 'A-level' exams, on the basis of which they would receive offers from universities. Prior to these exams, teachers would provide estimated grades, which students could already use to obtain preliminary offers from universities. However, due to the public health crisis caused by Covid-19, this system was disrupted, and the UK's assessment regulator Ofqual was tasked with finding another way for students to obtain their 'A-level' results. Its solution was a mathematical algorithm that used two key pieces of information: "the previous exam results of schools and colleges over the last 3 years, and the ranking order of pupils based on the teacher estimated grades" (Fai, Bradley and Kirker, 2020). The result? Almost 40% of all 700,000 estimated scores were downgraded, causing numerous students to be rejected from universities that had conditionally accepted them (Adams, Barr and Weale, 2020). Furthermore, the majority of the downgraded students came from state schools.
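The core mechanic described above can be illustrated with a small sketch. To be clear, this is not Ofqual's actual model, which was considerably more elaborate; the function, student names, and grade distribution below are entirely hypothetical. The sketch only shows the essential idea that grades were allocated by mapping the teachers' rank order of pupils onto a school's historical grade distribution, so that a student's individual ability beyond their rank never enters the calculation.

```python
# Illustrative sketch (NOT Ofqual's real algorithm): allocate grades by
# mapping a teacher-provided rank order onto a school's historical
# grade distribution. All names and numbers are hypothetical.

def assign_grades(ranked_students, historical_distribution):
    """ranked_students: list of names, strongest candidate first.
    historical_distribution: dict mapping grade -> fraction of past
    cohorts achieving it (best grade first, fractions sum to 1)."""
    n = len(ranked_students)
    results = {}
    cursor = 0
    for grade, fraction in historical_distribution.items():
        # Slots for this grade come from past cohorts, not from the
        # current students' own performance.
        slots = round(fraction * n)
        for student in ranked_students[cursor:cursor + slots]:
            results[student] = grade
        cursor += slots
    # Any students left over due to rounding get the lowest grade.
    for student in ranked_students[cursor:]:
        results[student] = list(historical_distribution)[-1]
    return results

cohort = ["Ana", "Ben", "Cara", "Dev", "Ella"]      # teacher's rank order
history = {"A": 0.2, "B": 0.4, "C": 0.4}            # past results at this school
print(assign_grades(cohort, history))
# -> {'Ana': 'A', 'Ben': 'B', 'Cara': 'B', 'Dev': 'C', 'Ella': 'C'}
```

Even in this toy version the failure mode is visible: if a school historically produced no A grades, its strongest current student cannot receive one, which is precisely why students at historically weaker state schools were downgraded disproportionately.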


Although the UK government announced in August this year that it would reverse the grading to match the teachers' estimates more closely, it is clear that for some students the damage had already been done. Affected students would not go to their desired university, or would decide not to go to university at all and postpone their higher education by at least a year. Looking back critically, it is evident that the ethical impacts of the mathematical algorithm were either not considered before it was deployed or simply ignored. Given the near limitless potential of AI in all facets of our future lives, it is crucial that ethical considerations become a central component of the AI development process.

References

Adams, R., Barr, C. and Weale, S. (2020) 'A-level results: almost 40% of teacher assessments in England downgraded', The Guardian, 13 August. Available at: https://www.theguardian.com/education/2020/aug/13/almost-40-of-english-students-have-a-level-results-downgraded (Accessed: 8 October 2020).

Fai, M., Bradley, J. and Kirker, E. (2020) 'Lessons in "Ethics by Design" from Britain's A Level algorithm', Gilbert + Tobin. Available at: https://www.gtlaw.com.au/insights/lessons-ethics-design-britains-level-algorithm (Accessed: 8 October 2020).
