Record Fine After Record Fine: Privacy in Data Collection and AI

19 September 2024


On 3 September, the Dutch Data Protection Authority (Autoriteit Persoonsgegevens) fined the American company Clearview AI 30.5 million euros, the second-highest fine the DDPA has ever issued (NOS, 2024b). The fine was imposed on Clearview AI, a facial recognition software company, for building an illegal database containing over fifty billion pictures. The pictures were scraped from the web, meaning everyone with an online presence could be in the database, even you. The record for the highest fine went to Uber only a week earlier. Uber had shared data it collected on its drivers with its headquarters in the United States without taking proper safety measures, and is set to pay the DDPA a fine of 290 million euros (NOS, 2024a). These fines might pale in comparison to the ones imposed by the EU on tech giants like Google and Facebook, but they contribute to a growing pushback from regulators against tech companies and their use of data. Besides regulators, consumers are also increasingly concerned about their (online) privacy. An interesting point to mention here is that while privacy concerns have grown over the past two decades, so has the voluntary sharing of data (Bartneck et al., 2020).

As the first example shows, AI companies collect vast amounts of data to operate and improve their AI models. On the one hand, this seems necessary if we want to apply AI more and more in our daily lives; on the other, it comes with notable risks. One of the primary issues identified by Bartneck et al. (2020) is the possibility that the gathered data is not used for its intended purpose. The surge of AI has also introduced new ways in which our data can be used. Because the general use of AI is still in its early days, most of us are not fully aware of the issues these new uses might pose. These include more obvious examples like impersonation and fake news, but also less obvious ones like predicting mortality: Bartneck et al. (2020) mention that AI has the potential to predict someone’s mortality by analyzing their movement, and the resulting analysis could be used by or sold to undertakers or insurance companies. It is therefore important that users become increasingly aware of how their data is used. To realize this, further transparency on data collection, safety, and usage is required.

This post is not meant as a plea against AI or the collection of user data. I would, however, like to make you think about what you want to share online and with whom. So maybe next time you will look into your cookie settings instead of hitting the “accept all” button.

References 

Bartneck, C., Lütge, C., Wagner, A., & Welsh, S. (2020). Privacy Issues of AI. In SpringerBriefs in ethics (pp. 61–70). https://doi.org/10.1007/978-3-030-51110-4_8

NOS. (2024a, August 26). Privacywaakhond legt hoogste straf ooit op: 290 miljoen euro boete voor Uber [Privacy watchdog imposes highest penalty ever: 290 million euro fine for Uber]. https://nos.nl/artikel/2534629-privacywaakhond-legt-hoogste-straf-ooit-op-290-miljoen-euro-boete-voor-uber

NOS. (2024b, September 3). Boete van privacywaakhond voor verzamelaar van miljarden foto’s van gezichten [Fine from privacy watchdog for collector of billions of photos of faces]. https://nos.nl/artikel/2535633-boete-van-privacywaakhond-voor-verzamelaar-van-miljarden-foto-s-van-gezichten
