Continuous breakthroughs in AI technology allow us to tackle increasingly complicated problems that were previously the exclusive domain of human cognition. Since the first AI programs of the 1950s, which could play amateur-level checkers, excitement about the possibilities of AI has grown in parallel with the complexity of the tasks it can solve. One key component of solving complex problems effectively, however, and one intrinsic to human nature, is understanding the context of the surrounding world in which the problem is being solved. Although humans can make AI more intelligent, in the sense that it can complete ever more complicated tasks at scale, the outcomes become increasingly volatile as AI searches for the most effective answer without necessarily any regard for the real world.
A recent example of this is the public outcry over the 'A-level' results, which were predicted by an algorithm for the first time this year. Normally students sit 'A-level' exams, on the basis of which they receive offers from universities. Prior to these exams, teachers provide estimated grades, which students can already use to obtain conditional offers. Due to the public health crisis caused by Covid-19, however, this system was disrupted, and the UK's assessment regulator Ofqual was tasked with finding another way for students to obtain their 'A-level' results. Its solution was a mathematical algorithm built on two key pieces of information: "the previous exam results of schools and colleges over the last 3 years, and the ranking order of pupils based on the teacher estimated grades" (Fai, Bradley and Kirker, 2020). The result? Almost 40% of all 700,000 estimated scores were downgraded, causing numerous students to be rejected from universities that had conditionally accepted them (Adams, Barr and Weale, 2020). Furthermore, the majority of the downgraded students came from state schools.
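To see how such a design can downgrade pupils regardless of their individual performance, consider a minimal sketch of the idea described above: imposing a school's historical grade distribution onto this year's cohort in teacher-ranked order. This is an illustrative toy, not Ofqual's actual model; the function name, grade ladder, and rounding are assumptions for the example.

```python
from collections import Counter

# UK A-level grades from best to worst (assumed ladder for this sketch)
GRADE_ORDER = ["A*", "A", "B", "C", "D", "E", "U"]

def assign_grades(ranked_pupils, historical_grades):
    """Hypothetical sketch: give this year's pupils, listed in teacher-ranked
    order (best first), grades matching the school's historical distribution."""
    n = len(ranked_pupils)
    counts = Counter(historical_grades)
    total = len(historical_grades)
    results = {}
    i = 0
    for grade in GRADE_ORDER:
        # Number of this year's pupils allotted this grade, by historical share
        k = round(counts[grade] / total * n)
        for pupil in ranked_pupils[i:i + k]:
            results[pupil] = grade
        i += k
    # Any pupils left over from rounding fall to the lowest grade
    for pupil in ranked_pupils[i:]:
        results[pupil] = GRADE_ORDER[-1]
    return results
```

The flaw the essay points to is visible even here: a pupil's result depends on the past performance of their school and their rank, not on their own predicted grade, so a strong pupil at a historically weak school is capped by the distribution.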
Although the UK government announced in August this year that it would reverse the grading to match the teachers' estimates more closely, it's clear that for some students the damage has already been done. Affected students would not attend their desired university, or would decide not to go to university at all and postpone their higher education by at least a year. Looking back critically, it's evident that the ethical impacts of the mathematical algorithm were either not considered before it was launched or simply ignored. Given the near-limitless potential of AI in all facets of our future lives, it's crucial that ethical considerations become a central component of the AI development process.
References
Adams, R., Barr, C. and Weale, S. (2020) 'A-level results: almost 40% of teacher assessments in England downgraded', The Guardian, 13 August. Available at: https://www.theguardian.com/education/2020/aug/13/almost-40-of-english-students-have-a-level-results-downgraded (Accessed: 8 October 2020).
Fai, M., Bradley, J. and Kirker, E. (2020) Lessons in 'Ethics by Design' from Britain's A Level algorithm. Gilbert + Tobin. Available at: https://www.gtlaw.com.au/insights/lessons-ethics-design-britains-level-algorithm (Accessed: 8 October 2020).