OpenAI is facing scrutiny from the Italian data protection authority, the Garante, following an investigation that concluded the company violated European privacy laws. The Italian regulator had imposed a temporary ban on OpenAI’s ChatGPT chatbot in 2023 for alleged violations of the EU’s General Data Protection Regulation (GDPR). Although in-country access was later restored after OpenAI implemented changes, including age verification and an opt-out form, the Italian agency now claims that the company continues to breach privacy laws. OpenAI has 30 days to respond to the findings, and this development is part of broader European scrutiny, with regulators in Germany, France, Spain, and Poland also examining the company’s privacy practices.
The Italian regulator’s announcement follows OpenAI’s introduction of an updated privacy policy in December 2023 in response to increased European scrutiny. The revised policy describes the data OpenAI collects and how it is processed, and users can now object to the processing of their data for direct marketing purposes or on legitimate-interest grounds, although OpenAI notes that these rights may be limited in some cases. The company has not yet responded to requests for comment on the Italian investigation.
The European Union is gearing up to implement a comprehensive regulation on artificial intelligence known as the AI Act. The legislation bans certain AI practices deemed to pose unacceptable risk, such as emotion recognition in workplaces and schools and the untargeted scraping of facial images from CCTV footage, while imposing strict obligations on high-risk systems. The Italian data protection authority’s actions highlight the evolving regulatory landscape surrounding AI and privacy in Europe, with authorities keen on ensuring compliance with the GDPR and other relevant laws.