AI Privacy

Italian DPA: ChatGPT unlawfully collects personal data and lacks age verification

The Italian Data Protection Authority has ordered, with immediate effect, a temporary restriction on the processing of Italian users’ data
by OpenAI, the U.S.-based company that developed and operates the platform. At the same time, the Authority opened an investigation.
ChatGPT, the best-known relational artificial intelligence software capable of simulating and processing human conversations, suffered a data breach on March 20 involving
user conversations and the payment information of subscribers to the paid service.
In the order, the DPA notes the lack of any notice to users and data subjects whose data is collected by OpenAI and, more importantly, the absence of a legal basis justifying the massive collection and storage of personal data to “train” the algorithms underlying the platform’s operation.
Moreover, as the checks carried out show, the information provided by ChatGPT does not always correspond to the actual facts, resulting in inaccurate processing of personal data.
Finally, although OpenAI’s published terms state that the service is aimed at users over the age of 13, the Authority points out that the absence of any age-verification filter exposes minors to responses that are entirely unsuitable for their level of development and self-awareness.
OpenAI, which does not have an establishment in the European Union but has designated a representative in the European Economic Area, must communicate within 20 days the measures taken to comply with the Garante’s order; otherwise it faces a fine of up to 20 million euros or up to 4 percent of its annual
global turnover.