ChatGPT, a powerful artificial intelligence tool developed by OpenAI, has been temporarily banned by Italy’s data protection agency over privacy and safety concerns. The agency alleges that the chatbot has been collecting user data unlawfully and failing to protect minors, as it lacks age verification and does not notify users that their data is being collected.
Failure to comply with the European Union’s General Data Protection Regulation could expose OpenAI to a fine of up to €20 million or 4 percent of its global annual turnover, whichever is higher. The company has 20 days to respond, either by remedying the alleged violations or by justifying its practices.
The Italian government’s announcement comes on the heels of a letter endorsed by over 1,000 technology leaders calling for a temporary moratorium on developing artificial intelligence systems more powerful than GPT-4, the latest model underlying OpenAI’s chatbot.
The letter, published by the nonprofit Future of Life Institute, highlights concerns over an “out-of-control race to develop and deploy ever more powerful digital minds that no one – not even their creators – can understand, predict, or reliably control.” The letter suggests that in the absence of a voluntary pause, government regulators should intervene.
This development raises the possibility that other European Union countries may also crack down on the service, further constraining OpenAI’s operations. Privacy and safety concerns of the kind raised by the Italian authority have long surrounded AI-powered applications, and regulators are becoming increasingly vigilant about ensuring that user data is protected.
It remains to be seen how OpenAI will respond to these allegations and whether the company will be able to continue operating ChatGPT in Italy and other countries in the EU.