A complaint recently lodged with the Austrian data protection authority by the privacy advocacy group Noyb has called into question the practices of OpenAI, the developer of the AI language model ChatGPT. At the heart of the matter are allegations that ChatGPT's operations breach provisions of the General Data Protection Regulation (GDPR), particularly those concerning privacy, data accuracy, and the right to rectify inaccurate information. Noyb contends that ChatGPT's failure to deliver accurate responses about individuals violates GDPR mandates, which require that personal data be handled carefully and kept accurate.
Central to the complaint is the assertion that OpenAI has not adequately addressed concerns about the accuracy of the information the model generates. Despite requests, OpenAI has allegedly refused to correct or delete erroneous responses and has been less than transparent about its data processing practices, including the sources and recipients of the data. This lack of transparency and accountability troubles privacy advocates, who stress that AI technologies must adhere to legal requirements and uphold user rights.
Luiza Jarovsky, CEO of Implement Privacy, offers broader context on how AI-based language models approach privacy. She characterizes the current state as a "privacy by pressure" paradigm, in which companies act only in response to public outcry or legal mandates. As an example, Jarovsky points to an earlier ChatGPT incident in which users' chat histories were inadvertently exposed. She also warns of the consequences of inaccurate information: even seemingly innocuous errors can have far-reaching implications, leading to reputational harm and privacy breaches.
Accordingly, the complaint calls for a thorough investigation into OpenAI's data handling practices to ensure GDPR compliance, and Noyb seeks penalties for any violations found, underscoring the need to safeguard individuals' privacy rights in the rapidly evolving landscape of AI technologies. As AI plays an increasingly significant role in society, accountability and transparency in its operations become paramount to protecting user privacy and data integrity.