Liechtenstein’s data protection regulator, Datenschutzstelle, has introduced new data processing guidelines specifically targeting large language model-powered chatbots, including systems like ChatGPT.
These guidelines come as European data protection authorities grapple with how to regulate technology that relies on vast datasets, including sensitive user information, to train AI models. The guidance focuses on how AI chatbots handle user data, cookies, and queries, with particular attention to sensitive information such as healthcare data.
The primary legal basis cited is the consent and transparency provisions of the General Data Protection Regulation (GDPR), though some scenarios may require compliance with additional privacy rules. The guidance sets out a data governance framework under which companies must obtain user consent before processing data, or be able to show that consent was given implicitly through the user's interaction with the application.
Where data processing extends beyond responding to queries, for example to build advertising profiles, separate consent under Article 6(1) of the GDPR is required.
The guidelines arrive as the European Union finalizes its AI Act, and as ChatGPT faces investigations over potential GDPR violations in Spain, France, Germany, and Poland. The EU is also establishing a European AI Office to enforce unified AI rules and to support the development of secure AI models in the private sector, underscoring the growing importance of AI governance and data protection in the digital landscape.