Italy’s data protection authority, the Garante, has imposed a hefty €15 million ($15.66 million) fine on OpenAI, the creator of ChatGPT, citing violations of the European Union's General Data Protection Regulation (GDPR). The penalty highlights concerns over how the generative AI tool handles personal data.
The GDPR Violations
The Garante's decision stems from an investigation launched nearly a year ago. It found that OpenAI had processed user data to train ChatGPT without obtaining an adequate legal basis, breaching GDPR requirements.
The regulator further criticized OpenAI for failing to:
- Notify the Garante of a security breach in March 2023.
- Adhere to transparency principles and its obligations to inform users.
- Implement adequate age verification mechanisms, leaving children under 13 exposed to potentially inappropriate AI-generated responses.
Mandated Actions
Beyond the €15 million fine, OpenAI has been ordered to run a six-month public awareness campaign across various media platforms. The campaign must inform both users and non-users about:
- The nature of data collected by ChatGPT.
- The rights users have under GDPR, including the rights to object to processing and to have their data rectified or erased.
- How to prevent their personal data from being used to train generative AI models.
The Garante emphasized the importance of empowering individuals to exercise their data privacy rights effectively.
ChatGPT’s Temporary Ban and OpenAI’s Response
Italy became the first Western country to temporarily ban ChatGPT in March 2023 over data protection concerns. OpenAI addressed the regulator's initial objections, and the ban was lifted in April 2023.
In response to the recent fine, OpenAI described the decision as "disproportionate" and confirmed plans to appeal. The company argued that the fine was nearly 20 times its revenue in Italy for the period in question. OpenAI reiterated its commitment to developing AI tools that respect users’ privacy rights.
EDPB’s Stance on AI and GDPR
The ruling aligns with a recent opinion from the European Data Protection Board (EDPB) on how AI models can comply with GDPR. The EDPB clarified that:
- An AI model initially trained on unlawfully processed personal data but later anonymized for deployment would not necessarily violate GDPR.
- However, any subsequent processing of personal data during the model's operation would still be subject to GDPR rules.
The Board also recently issued guidelines on data transfers outside the EU, emphasizing that such transfers must comply with GDPR. These guidelines are open for public consultation until January 27, 2025.
Key Takeaways
OpenAI’s case serves as a reminder of the significant challenges AI companies face in navigating complex data privacy regulations like GDPR. As regulatory scrutiny intensifies, companies must prioritize transparency, user rights, and data protection to build trust in generative AI technologies.