ChatGPT, the best-known conversational artificial intelligence software capable of simulating and processing human conversation, suffered a data breach on March 20 involving users' conversations and subscribers' payment information. Despite the timely intervention of technicians at OpenAI, the U.S. company that developed and operates the platform, to fix the problem and limit the damage, the Italian Privacy Authority stepped in, ordering OpenAI, with immediate effect, to temporarily restrict the processing of Italian users' data. The Authority has given the U.S. company until April 30, 2023, to comply with the requirements imposed on its conversational chatbot, while simultaneously launching an investigation. An agreement would allow the dispute to be closed.
The Authority has asked OpenAI to provide, in a location prominently visible to users accessing ChatGPT, a transparent notice explaining how the data needed to train the algorithm is processed, how the programming interface works, and what users' rights are.
A note from the Authority reads: "For users who connect from Italy, the notice must be presented before registration is completed and, also before registration is completed, they must be asked to declare that they are of legal age."
In the case of ChatGPT, personal data means all the data we give up, even unintentionally, when we query it.
The Authority has also called for a plan, to be submitted by May 31, to implement an age verification system that bars access to children under 13 who lack parental consent. Finally, communication campaigns on radio, TV, and the internet will have to inform people about how algorithms use personal data. OpenAI will have to inform people "that their personal data are likely to be collected for the purpose of algorithm training," and that up-to-date notices and tools for objecting to the use of their data are available on the site.