In recent years, artificial intelligence has moved from science fiction into our daily routines. We stream series and films based on personalized recommendations, unlock our phones with facial recognition, and ask ChatGPT what we should eat for dinner. But these conveniences have a hidden side: the systems behind them rely on enormous amounts of data, much of it personal. The question that keeps coming back is: can we trust AI with our data?
The risks are plain to see. Many AI models are trained on data scraped from the internet: social media posts, blogs, and images uploaded over the years. Most internet users never gave explicit permission for this, but once something is posted online, it rarely disappears. Fragments of our digital lives are training and shaping the behaviour of large-scale commercial systems (Stanford HAI, 2024). Even when data is anonymized, AI models can infer sensitive information such as health conditions, political preferences, or religious beliefs from patterns hidden in seemingly harmless details (F5, 2024). In some cases, weak security can cause private information to leak outright. These risks extend well beyond what any individual can control.
Europe has tried to address these challenges through regulation. The General Data Protection Regulation (GDPR) has already set the standard for how organizations should handle personal information, requiring transparency, consent, and data minimization (IBM, 2024). More recently, the EU Artificial Intelligence Act, which entered into force in August 2024, has gone a step further by classifying AI systems according to risk. High-risk systems, particularly those with a direct impact on safety and fundamental rights, will now face stricter rules around documentation, data governance, and transparency (European Parliament, 2024). Together, these two frameworks aim to create a regulatory environment where AI innovation can coexist with strong protection of individual rights (INTA, 2024).
But even with these developments, a constant tension between convenience and protection will remain. Every time we embrace a smarter, faster, more personalized service, we implicitly decide how much privacy we are willing to trade. For me, the most important question for the future is not simply whether we can trust AI with our data, but what kind of society we want AI to help create. If we choose carefully, privacy and progress can grow together. If not, the price of convenience may turn out to be far higher than we expect.
References
European Parliament. (2024). The EU AI Act: First regulation on artificial intelligence. Retrieved on 14 September 2025, from https://www.europarl.europa.eu/topics/en/article/20230601STO93804/eu-ai-act-first-regulation-on-artificial-intelligence
F5. (2024). Top AI and Data Privacy Concerns. Retrieved on 14 September 2025, from https://www.f5.com/company/blog/top-ai-and-data-privacy-concerns
IBM. (2024). Exploring privacy issues in the age of AI. IBM Think Insights. Retrieved on 14 September 2025, from https://www.ibm.com/think/insights/ai-privacy
INTA. (2024). How the EU AI Act Supplements GDPR in the Protection of Personal Data. Retrieved on 14 September 2025, from https://www.inta.org/perspectives/features/how-the-eu-ai-act-supplements-gdpr-in-the-protection-of-personal-data/
Stanford HAI. (2024). Privacy in an AI Era: How Do We Protect Our Personal Information? Retrieved on 14 September 2025, from https://hai.stanford.edu/news/privacy-ai-era-how-do-we-protect-our-personal-information