LinkedIn Data and AI Training: A Growing Privacy Concern

1 October 2025


The Dutch privacy authority (Autoriteit Persoonsgegevens, AP) recently advised LinkedIn users to opt out of the platform’s new policy that allows personal data to be used for training artificial intelligence (AI) systems (AP, 2025). This move raises an important question: at what point does innovation cross the line into exploitation? Tech companies often frame data collection as progress, but in reality, users are rarely given meaningful choice or control over how their information is used.

Once information is fed into AI systems, removal becomes nearly impossible. The AP explicitly warned that individuals risk losing control over their data (AP, 2025). This is not limited to LinkedIn. Companies like Apple, Nvidia, and Anthropic have also used online content, including social media and video platforms, to train their systems, raising similar questions about transparency and permission (Gilbertson & Reisner, 2024).

The legal situation remains uncertain. In the United States, courts have begun to reassess how copyright and fair use apply to AI training, as shown in Thomson Reuters v. Ross Intelligence, where the court reversed its own earlier ruling on fair use (Levi et al., 2025). These disputes illustrate how quickly AI has outpaced existing legal frameworks, leaving users’ rights exposed.

Beyond privacy and law, there are broader social risks. If companies train models on datasets skewed toward particular groups or perspectives, the outputs will reflect and amplify those biases (Dilmegani, 2025). That means decisions in hiring, finance, or even healthcare could be influenced by flawed or unrepresentative data.
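To make that concrete, here is a minimal sketch (in Python with NumPy and scikit-learn, using purely synthetic data and a made-up group attribute, not anything drawn from the sources cited here) of how a classifier trained on skewed historical hiring decisions ends up reproducing that skew in its predictions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000

# Synthetic history: group 1 is heavily underrepresented in the data.
group = (rng.random(n) < 0.1).astype(int)
skill = rng.normal(0.0, 1.0, n)

# Past hiring decisions encode a bias: at equal skill, group 1 was hired less often.
hired = ((skill + rng.normal(0.0, 0.5, n) - 0.8 * group) > 0).astype(int)

X = np.column_stack([skill, group])
model = LogisticRegression().fit(X, hired)

# At the same skill level, the trained model assigns group 1 a noticeably
# lower predicted probability of being hired, reproducing the historical gap.
same_skill = np.array([[0.5, 0.0], [0.5, 1.0]])
print(model.predict_proba(same_skill)[:, 1])
```

Nothing in this toy example corrects itself: unless the skewed labels or features are audited and fixed, the model simply learns the pattern it is given.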

The central issue is one of accountability. Companies argue that data-driven training is necessary for innovation, but innovation cannot come at the expense of trust and fairness. Opt-outs and transparency should be standard practice, not hidden in settings menus. Without stronger safeguards, AI risks being built on practices that exploit rather than respect the individuals who provide the data.

Reading and researching this made me think more critically about the AI tools I use daily, like ChatGPT, Claude, or Perplexity. It highlighted the importance of knowing where models get their data and the potential consequences for privacy and fairness. If I were designing one, I’d make it clear what data is used and give people an easy way to opt out. This experience has made me more cautious and intentional about how I interact with AI tools, and more aware of how quietly AI systems can shape our privacy and fairness.
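As a rough sketch of what that could look like in practice (the records, field names, and the allow_ai_training flag below are entirely hypothetical, not based on LinkedIn’s or any real platform’s implementation), a training pipeline could treat consent as opt-in by default and exclude everything else:

```python
# Hypothetical records; "allow_ai_training" is a made-up consent flag.
# Consent defaults to False, so only explicit opt-ins reach the training set.
posts = [
    {"user": "alice", "text": "Post A", "allow_ai_training": True},
    {"user": "bob", "text": "Post B", "allow_ai_training": False},
    {"user": "carol", "text": "Post C"},  # flag never set -> treated as no consent
]

training_data = [p["text"] for p in posts if p.get("allow_ai_training", False)]
print(training_data)  # ['Post A']
```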

References:

AP. (2025). AP bezorgd over AI-training LinkedIn en roept gebruikers op om instellingen aan te passen [AP concerned about LinkedIn AI training and calls on users to adjust their settings]. Autoriteit Persoonsgegevens. https://autoriteitpersoonsgegevens.nl/actueel/ap-bezorgd-over-ai-training-linkedin-en-roept-gebruikers-op-om-instellingen-aan-te-passen

Levi, S. D., Feirman, J., Ghaemmaghami, M., & Morgan, S. N. (2025). Court reverses itself in AI training data case. Skadden. https://www.skadden.com/insights/publications/2025/02/court-reverses-itself-in-ai-training-data-case

Dilmegani, C. (2025). Bias in AI: Examples and 6 ways to fix it. AIMultiple. https://research.aimultiple.com/ai-bias/

Gilbertson, A., & Reisner, A. (2024). Apple, Nvidia, Anthropic used thousands of swiped YouTube videos to train AI. WIRED. https://www.wired.com/story/youtube-training-data-apple-nvidia-anthropic/


2 thoughts on “LinkedIn Data and AI Training: A Growing Privacy Concern”

  1. Love the relevance of this post.

    I remember reading about and encountering preconceived biases in ChatGPT’s responses because it was trained on data skewed towards certain demographics. Even the Ghibli image fad amplified these concerns, since many users could observe biases in the facial image generation. But your post (and importantly so) sheds light on how this concern can impact something as consequential and people-centric as hiring. I can only hope that other countries (especially those outside the EU, since many countries’ privacy laws are not even as strong as the GDPR) follow the Dutch authority’s example and encourage or require companies to at least display opt-outs explicitly.

    1. Thanks for the thoughtful comment! You’re right: bias in AI doesn’t just show up in abstract ways like text or images, but also in areas that directly affect people’s opportunities, like hiring and recruitment. Hopefully, stronger regulations combined with public awareness will encourage companies to be more transparent and responsible in how their AI systems operate.
