Meta, the parent company of Facebook and Instagram, has announced a significant delay to its plans to train large language models (LLMs) on public content from adult users in the European Union. The decision follows a request from the Irish Data Protection Commission (DPC) that Meta pause the effort, citing concerns over the processing of personal data without explicit user consent. Meta had originally planned to roll out the changes on June 26, relying on the legal basis of ‘Legitimate Interests’ to train its AI models, which would have allowed the company to use both first- and third-party data for AI development without requiring users’ explicit consent.
Stefano Fratta, Meta’s global engagement director for privacy policy, expressed disappointment over the delay, stating that Meta’s approach complies with European laws and regulations and emphasizing that the company has been more transparent than its industry peers about its AI training practices. The postponement nonetheless underscores the complexity and regulatory scrutiny surrounding data privacy in Europe, particularly where personal data is used to advance AI technologies.
The decision affects Meta’s ability to introduce advanced AI tools and innovations in Europe, where it seeks to use local data to train AI models effectively, capturing the diverse languages, cultural references, and geographic nuances needed to improve user experiences and expand AI capabilities across the region. The delay follows complaints filed in multiple European countries by noyb (none of your business), an Austrian privacy advocacy group, alleging violations of the General Data Protection Regulation (GDPR). The organization criticizes Meta’s approach, arguing that the GDPR mandates informed opt-in consent for processing personal data, especially for AI development.
Max Schrems, founder of noyb, condemned Meta’s practices, accusing the company of potentially circumventing GDPR protections by using data for unspecified AI purposes without obtaining users’ explicit consent, and stressed that user privacy rights must be upheld from the outset of any AI development initiative. As Meta continues discussions with regulators and works to address the concerns raised by the DPC and other data protection authorities, the outcome of those talks is likely to shape future standards for AI development and data privacy protections in Europe.