LinkedIn has quietly incorporated user data into its generative AI training models without prior notice. The practice was uncovered by 404 Media and other outlets, which reported that LinkedIn began using member data to develop and improve its AI models before updating its privacy policy. The platform later added a section to the policy outlining how personal data may be used for AI training, development, and personalization. Until that update, users were not informed that their data was being used for AI purposes.
After the data collection for AI training was uncovered, LinkedIn introduced an option for users to opt out of this practice. This setting allows users to prevent their data from being used to train and fine-tune the company’s AI models. However, opting out does not affect any training that has already been conducted, meaning user data already used for AI purposes will remain part of the models. Additionally, users who choose to opt out can still use generative AI features, such as interacting with LinkedIn’s chatbot, though their personal data will not contribute to further AI training.
It’s important to note that users in the EU, EEA, and Switzerland are not subject to this practice due to strict privacy laws. LinkedIn clarified that it does not collect or use user data from these regions for AI training. As a result, users in these areas will not see the option to opt out since their data is already protected under local regulations.
LinkedIn’s data collection for AI training raises privacy concerns that echo recent issues at other tech companies. Google is facing legal action for allegedly misleading users about its data practices, while Meta recently paid $1.4 billion to settle claims over privacy violations involving Facebook users’ biometric data. By using member data for AI development without prior consent, LinkedIn has drawn similar criticism and could face comparable legal challenges in the future.