ChatGPT, the best-known conversational artificial intelligence software, capable of simulating and processing human conversations, suffered a data breach on March 20 involving user conversations and subscribers' payment information. Despite the prompt intervention of technicians at OpenAI, the U.S. company that developed and operates the platform, to fix the problem and limit the damage, the Italian Data Protection Authority (the Garante) ordered, with immediate effect, a temporary restriction on OpenAI's processing of Italian users' data. The Authority has given the U.S. company until April 30, 2023, to comply with the requirements imposed on its conversational chatbot, and has simultaneously opened an investigation. Reaching an agreement would allow the dispute to be closed.

The Garante has asked OpenAI to prepare, and to place in a clearly visible position on the site used to access ChatGPT, a transparent notice explaining how the data needed to train the algorithm are processed, how the programming interface works, and what rights users have.

A note from the Garante reads, "For users who connect from Italy, the notice must be presented prior to the completion of registration and, again prior to the completion of registration, they must be asked to declare that they are of legal age."

In ChatGPT's case, personal data means all the data we hand over, even unintentionally, when we query it.

The Authority has also called for a plan, due by May 31, to implement an age verification system that bars access by children under 13 who do not have parental consent. Finally, communication campaigns on radio, TV and the internet will have to inform people about how algorithms use personal data. OpenAI will have to inform "people that their personal data are likely to be collected for the purpose of algorithm training" and make clear that up-to-date notices and tools to object to the use of their data are available on the site.

Artificial intelligence could also have significant consequences for insurance companies.

The evolving situation also needs to be watched closely by insurance companies. According to a study by the European Insurance and Occupational Pensions Authority (EIOPA), tools such as artificial intelligence and machine learning (ML) are already being used by 31% of insurance companies, while another 24% are at the proof-of-concept stage with these systems.

Insurance companies could put artificial intelligence to use in advising customers, assisting with and selecting insurance products, customizing premiums, managing claims and preventing fraud.

The benefits of using AI are many, and it is not science fiction to think that the evolution of ChatGPT could bring greater efficiency, lower costs and reduced operational risk.