ChatGPT: Lessons From Italy’s Temporary Ban of the AI Chatbot

April 22, 2023 - By Oreste Pollicino, Giovanni De Gregorio

In March 2023, Italy became the first Western country to block the advanced artificial intelligence (“AI”) chatbot known as ChatGPT. The Italian data protection authority, Garante, cited concerns over the protection of personal data when making this decision, including ChatGPT’s collection of data in a way that is incompatible with data protection law. Another reason given was the lack of age verification by the platform, which could expose children to harmful content. As a result, the Garante used an emergency procedure to temporarily suspend OpenAI’s processing of personal data and gave the California-based company that created ChatGPT until the end of April to comply with its demands.

News about the temporary ban spread across the world, raising concerns about the consequences of decisions like this for the development of new AI applications. The move also coincided with a call by experts and businesspeople to place limits on the development of AI-based applications until the risks could be better assessed. The temporary ban could offer some important lessons about the proportionality and effectiveness of bans on developing technologies, about coordination between member states at the European level, and about how to balance access to services with the need to protect children from harmful content.

The order – which was issued on March 30 – was signed by Pasquale Stanzione, the president of the data protection authority in Italy, and followed a notification about a data breach concerning ChatGPT user data that had been reported ten days earlier.

Data Processing

Garante briefly justified its measures by underlining the lack of information available to users and data subjects about the data processed by OpenAI. It also cited the large-scale processing of personal data to train generative systems such as ChatGPT. OpenAI’s terms state that ChatGPT is provided only to users aged over 13. However, this did not satisfy Italy’s data protection authority, which was concerned about the lack of age verification. OpenAI’s reaction was, first, to block access to ChatGPT in Italy and, second, to signal its willingness to collaborate with Garante on complying with the temporary order.

Compliance would involve OpenAI implementing safeguards including the provision of a privacy policy, enabling users to exercise their individual data protection rights, and providing information about the company’s legal basis for processing personal data. Garante welcomed these commitments. It suspended the temporary order and requested that OpenAI implement these safeguards by the end of April 2023.

Harmonized Framework

The case nonetheless highlights some key lessons: the lack of European coordination in regulating this technology, the effectiveness and proportionality of the measure, and the protection of children.

First, more European coordination is needed around the general issue of AI technology. The EU’s proposed AI Act is only one step towards a harmonized framework for ensuring the development of AI technologies that are aligned with European values. And as Italy’s ban has shown, the EU regulatory model can potentially become fragmented if national authorities go in their own directions. In particular, the connection between AI and data protection empowers national authorities to react to the development of new AI technology. It also underlines the need for more coordination between European member states on regulation of all kinds. 

Planning, Not Banning

Second, the measures adopted by the Italian data protection authority raise questions about both effectiveness and proportionality. Regarding effectiveness, it is worth noting that, following news of the ban, there were reports of a 400 percent surge in VPN downloads in Italy, potentially enabling users to get around it.

On the question of proportionality, a general ban does not seem to strike a balance between the conflicting constitutional interests at stake. The temporary measure in Italy does not mention how it takes into account the protection of other interests, such as the freedom of users to access ChatGPT. Even though the ban is temporary, the situation might have benefited from earlier involvement by other board members of Italy’s data protection authority. A preliminary exchange with OpenAI could have avoided a ban altogether, prompting the company to implement further safeguards to comply with data protection law without any interruption to the service.

Finally, the decision raises questions about the best ways to protect children from any harmful content created by these applications. Introducing an age verification system or alerts regarding harmful content could have been topics for discussion, had the parties been engaged in an ongoing dialogue. This case offers an example of how general bans imposed on new technological applications are usually the result of quick reactions that do not involve a deep assessment of the effectiveness and proportionality of the measure. 

Even if one argues that the decision tends towards protecting fundamental rights, primarily in data protection and safeguards for children, it leads to more uncertainty. A preventative and collaborative approach with OpenAI would have minimized the risk of this service being blocked in Italy. Continued discussion between OpenAI and Italy’s authorities is critical.


Oreste Pollicino is a Professor of Constitutional Law at Bocconi University. 

Giovanni De Gregorio is the PLMJ Chair in Law and Technology at Católica Global School of Law and Católica Lisbon School of Law.
