
Italy to lift ChatGPT ban subject to new data protection controls

Italian regulator will lift its ban on OpenAI’s ChatGPT subject to a strict new data protection regime

Italy’s privacy and data protection regulator will lift its recently imposed ban on OpenAI’s ChatGPT service at the end of April 2023 if OpenAI implements a series of measures to address the regulator’s concerns.

The Garante per la Protezione dei Dati Personali (GPDP) ordered Microsoft-backed OpenAI to cease offering its service in Italy at the end of March, saying there was no way for ChatGPT to process data without breaching privacy laws, and no legal basis underpinning its collection and processing of data for training purposes.

It also cited the lack of an age verification mechanism, which means children under 13 could be exposed to inappropriate responses to the prompts they enter into ChatGPT.

The GPDP has now imposed a number of conditions on OpenAI that it believes will address its concerns about the privacy and security of the ChatGPT offering.

“OpenAI will have to comply by 30 April with the measures set out by the Italian SA concerning transparency, the right of data subjects – including users and non-users – and the legal basis of the processing for algorithmic training relying on users’ data,” the regulator said in a statement.

“Only in that case will the Italian SA lift its order that placed a temporary limitation on the processing of Italian users’ data, there being no longer the urgency underpinning the order, so that ChatGPT will be available once again from Italy.”

OpenAI must now draft and make available online a notice describing the “arrangements and logic” of the data processing needed to run ChatGPT, and the rights afforded to data subjects, both users and non-users.

Users signing up in Italy will have to be presented with this notice and declare they are over the age of 18, or have obtained parental consent if aged 13 to 18, before being permitted to use ChatGPT. The age-gating mechanism must be implemented by 30 September 2023.
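Purely as an illustration of the kind of age-gating logic the order describes – this is not OpenAI’s implementation, and the function and parameter names below are hypothetical – a minimal sketch in Python might look like this:

```python
from datetime import date

MINIMUM_AGE = 13   # under-13s may not use the service at all
ADULT_AGE = 18     # 13- to 17-year-olds need declared parental consent

def may_use_service(birth_date: date, has_parental_consent: bool, today: date | None = None) -> bool:
    """Return True if the declared date of birth clears the age gate.

    Hypothetical sketch only: it assumes the user has already been shown the
    data-processing notice and is self-declaring their age and, where needed,
    parental consent at sign-up.
    """
    today = today or date.today()
    # Age in whole years, accounting for whether the birthday has passed this year
    age = today.year - birth_date.year - ((today.month, today.day) < (birth_date.month, birth_date.day))
    if age >= ADULT_AGE:
        return True
    if age >= MINIMUM_AGE:
        return has_parental_consent
    return False
```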

OpenAI has also been ordered to remove all references to contractual performance and rely instead – in line with accountability principles in the European Union’s (EU) General Data Protection Regulation (GDPR) – on either consent or legitimate interest as the applicable legal basis for the processing of personal data for training algorithms.

It must apply a set of measures to enable data subjects to erase or rectify their personal data if used incorrectly by ChatGPT, and enable non-users to exercise their right to object to the processing of personal data – even if legitimate interest is chosen as the legal basis for processing it.
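Again purely as an illustrative sketch – the request types, field names and handler below are hypothetical, not OpenAI’s API – the three rights referred to in the order can be thought of as a small dispatcher over incoming requests:

```python
from dataclasses import dataclass
from enum import Enum, auto

class RightsRequestType(Enum):
    ERASURE = auto()        # remove personal data used incorrectly
    RECTIFICATION = auto()  # correct inaccurate personal data
    OBJECTION = auto()      # object to processing; open to non-users as well

@dataclass
class RightsRequest:
    subject_email: str
    request_type: RightsRequestType
    is_registered_user: bool  # non-users must also be able to object

def handle_rights_request(req: RightsRequest) -> str:
    """Hypothetical handler routing a data subject rights request to a (stubbed) back-end action."""
    if req.request_type is RightsRequestType.ERASURE:
        return f"erasure queued for {req.subject_email}"
    if req.request_type is RightsRequestType.RECTIFICATION:
        return f"rectification queued for {req.subject_email}"
    # An objection must be honoured even where legitimate interest is the chosen legal basis
    return f"objection recorded for {req.subject_email}"
```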

Finally, OpenAI has been instructed to run a public awareness campaign in the Italian media to inform individuals about the use of their data for training algorithms.

“The Italian SA will carry on its inquiries to establish possible infringements of the legislation in force, and may decide to take additional or different measures if this proves necessary upon completion of the fact-finding exercise underway,” said the GPDP.

Ilia Kolochenko, founder of ImmuniWeb and a member of Europol’s data protection experts network, commented: “Privacy issues are just a small fraction of regulatory troubles that generative AI, such as ChatGPT, may face in the near future. Many countries are actively working on new legislation for all kinds of AI technologies, aiming at ensuring non-discrimination, explainability, transparency and fairness – whatever these inspiring words may mean in a specific context, such as healthcare, insurance or employment.

“Of note, the regulatory trend is not a prerogative of European regulators. For example, in the United States, the FTC is poised to actively shape the future of AI.”

Training data

Kolochenko said one of the biggest issues with generative AI is clearly its training data, which is all too often scraped by AI suppliers without any permission from content creators or individuals.

He warned that while current intellectual property (IP) law appears to offer little to no protection against this, large-scale data scraping is likely to violate the terms of service of digital resources, which may eventually lead to litigation.

That said, Kolochenko added, banning AI was still not a good idea. “While law-abiding companies will submissively follow the ban, hostile nation-state and threat actors will readily continue their research and development, gaining unfair advantage in the global AI race,” he said.
