The UK Information Commissioner’s Office (ICO) has launched an inquiry into the privacy implications of using scraped data to train generative artificial intelligence (AI) models. The ICO is soliciting input from AI developers, legal experts, and industry stakeholders to assess whether AI systems trained on data scraped from the public internet risk violating privacy rights. The central concern is the inclusion of personally identifiable information, such as names and contact details, in the training data. The ICO’s consultation aims to determine whether AI developers’ current data processing practices comply with the UK General Data Protection Regulation (UK GDPR) and the Data Protection Act 2018.
The consultation focuses specifically on whether AI developers satisfy the “lawfulness” requirement of the UK GDPR, which sets out six lawful bases for processing personal data. These bases include obtaining user consent and demonstrating that processing serves a legitimate interest. The ICO emphasizes that AI developers must take their legal obligations seriously and properly weigh legitimate interests when training generative AI models on web-scraped data. The consultation period closes on March 1, after which the ICO plans to issue guidance on AI informed by the responses received.
The absence of comprehensive artificial intelligence regulation in the United Kingdom has prompted the ICO to take a proactive approach to monitoring AI within its jurisdiction. While there is no dedicated AI law, the ICO, along with other regulators, has been tasked by the British government with overseeing the data and privacy aspects of AI, reflecting the growing recognition that regulatory oversight is needed in this domain.