Algorithms are fully integrated into our everyday lives. Your social media feeds are tailored to your interests, advertisements are targeted, and even on the street, traffic lights help keep you safe. Yet even though algorithms offer exciting possibilities and benefits, they do not come without danger. There are plenty of examples in which algorithms may do more harm than good. Consider the death of 14-year-old Molly Russell, who took her own life in 2017 after watching harmful content online (Milmo, 2022). Another example is the TikTok algorithm, which appears to actively promote misogynistic content, such as videos from influencer Andrew Tate, on users' personalized feeds (Das, 2022).
While algorithms have been in the hot seat for a while, no resolution or common consensus seems to have been reached. Recently, the UK government has been working on passing an online safety bill (Milmo, 2022). In the US, the Supreme Court is hearing cases in which Google and Twitter are being sued for allegedly promoting content that encouraged terrorist attacks (Dearing, 2022).

To address the dangers of algorithms, let's go back to the basic definition of an algorithm: "a step-by-step procedure for solving a problem or accomplishing some end" (Merriam-Webster, n.d.). In this light, an algorithm is nothing more than a program that takes input and classifies it (Spichak, 2022). That classification is designed by people: algorithms are built to mimic human thought and cognitive processes. Hence, issues with algorithms arise from how they are developed, trained, and benchmarked on data. In particular, the dangers lie in algorithmic bias and dangerous feedback loops (Dickson, 2022). Machine learning algorithms need quality data to be trained and to remain accurate. When there is not enough quality data for a specific group, that group is often the one most hurt by the result. The feedback loop then makes things worse: the algorithm makes flawed decisions, those decisions generate more poor-quality data, and that data is fed back in to further train the algorithm, reinforcing the prejudice (Dickson, 2022).
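To make the feedback-loop danger concrete, here is a minimal Python sketch. It is a hypothetical illustration, not drawn from Dickson (2022) or any other cited source, and the group names and numbers are made up: a system greedily directs all of its attention to whichever group already has the most recorded incidents, and new incidents can only be recorded where attention goes. A one-data-point initial skew then grows into a large, self-reinforcing gap even though both groups behave identically.

```python
import random

random.seed(0)

# Toy sketch of a dangerous feedback loop (illustrative only, not from the
# cited sources). All attention (patrols, moderators, recommendations) goes
# to the group with the most recorded incidents so far, and new incidents
# can only be recorded where attention was sent. Both groups have the same
# true incident rate, yet the recorded gap keeps widening.

def simulate(rounds=10, attention=50, true_rate=0.3):
    recorded = {"A": 6, "B": 5}  # a tiny initial skew in the historical data
    for r in range(1, rounds + 1):
        # The "model" is trivial: focus on whichever group looks worse so far.
        target = max(recorded, key=recorded.get)
        # Incidents occur at the same true rate everywhere, but only the
        # targeted group gets observed, so only its count grows.
        new_incidents = sum(random.random() < true_rate for _ in range(attention))
        recorded[target] += new_incidents
        print(f"round {r:2d}: attention -> {target}, recorded = {recorded}")

simulate()
```

Running the sketch, group A (which started with just one more recorded incident) absorbs every round of attention and its count races ahead, while group B's real incidents go unrecorded. The same dynamic, in a less cartoonish form, is what turns a small data imbalance into entrenched algorithmic bias.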
Most proposed remedies for algorithmic bias and harm focus on detection, mitigation, and regulation (Lee et al., 2019). Consumer rights and innovation need to be balanced carefully: algorithms should be deployed fairly and ethically. Regulatory recommendations include extending civil rights protections to digital practices, promoting anti-bias sandboxes, and using safe harbours (Lee et al., 2019). In the end, algorithms are nothing more than programs developed by humans. As a result, it is up to us to prevent their harmful effects.
References
Das, S. (2022). How TikTok bombards young men with misogynistic videos. Retrieved from https://www.theguardian.com/technology/2022/aug/06/revealed-how-tiktok-bombards-young-men-with-misogynistic-videos-andrew-tate?CMP=Share_iOSApp_Other
Dearing, T. (2022). The Supreme Court is hearing cases on dangerous algorithms. Retrieved from https://www.wbur.org/radioboston/2022/10/06/october-6-2022-rb
Merriam-Webster (n.d.). Definition of an algorithm. Retrieved from https://www.merriam-webster.com/dictionary/algorithm
Milmo, D. (2022). TechScape: Social media firms face a safety reckoning after the Molly Russell inquest. Retrieved from https://www.theguardian.com/technology/2022/oct/05/techscape-molly-russell-inquest
Spichak, S. (2022). The dangers of AI: Bad algorithms are a more immediate danger than Ultron. Retrieved from https://thedebrief.org/the-dangers-of-ai-bad-algorithms-are-a-more-immediate-danger-than-ultron/