Snapchat has revised its artificial intelligence (AI) privacy policy after an investigation by the UK’s Information Commissioner’s Office (ICO) determined the company violated user privacy rights. The probe found that Snapchat had not adequately assessed the privacy risks posed by its generative AI-powered chatbot, My AI, particularly to children. As a result, the ICO concluded that Snapchat had failed to comply with data protection regulations.
On Tuesday, the ICO announced that Snapchat has now brought its AI privacy measures into compliance with UK data protection laws. The regulator emphasized the importance of companies conducting thorough data risk assessments before launching AI products to ensure user protection. Stephen Almond, the ICO’s executive director of regulatory risk, stated that the ICO would continue to monitor such assessments and use its enforcement powers to safeguard public privacy.
The decision comes amid ongoing ICO efforts to address AI-related privacy concerns, including its bid to reinstate a fine against Clearview AI after a tribunal overturned the original penalty. Although the UK lacks binding AI-specific regulations, the ICO’s actions align with the British government’s strategy of overseeing AI through existing regulatory frameworks.
In its broader efforts to mitigate AI privacy risks, the ICO recently launched consultations on the relationship between AI model purposes and accuracy, and on the legality of processing personally identifiable information drawn from public datasets. These initiatives are part of the regulator’s strategy to ensure AI technologies do not infringe on individual privacy rights.