The Italian government’s privacy watchdog said Friday that it is temporarily blocking the artificial intelligence software ChatGPT in the wake of a data breach.
In a statement on its website, the Italian Data Protection Authority described its action as provisional “until ChatGPT respects privacy.” The measure imposes an immediate temporary limit on OpenAI’s processing of Italian users’ data.
U.S.-based OpenAI, which developed ChatGPT, didn’t immediately return a request for comment Friday.
While some public schools and universities around the world have blocked the ChatGPT website from their local networks over student plagiarism concerns, it’s not clear how Italy would block it at a nationwide level.
The move also is unlikely to affect applications from companies that already have licenses with OpenAI to use the same technology driving the chatbot, such as Microsoft’s Bing search engine.
The AI systems that power such chatbots, known as large language models, are able to mimic human writing styles based on the huge trove of digital books and online writings they have ingested.
The Italian watchdog said OpenAI must report within 20 days on the measures it has taken to ensure the privacy of users’ data, or face a fine of up to 20 million euros (nearly $22 million) or 4% of annual global revenue.
The agency’s statement noted that ChatGPT suffered a data breach on March 20 “regarding the conversations of users and information related to the payment of the subscribers for the service.”
OpenAI earlier announced that it had to take ChatGPT offline on March 20 to fix a bug that allowed some people to see the titles, or subject lines, of other users’ chat history.
“Our investigation has also found that 1.2% of ChatGPT Plus users might have had personal data revealed to another user,” the company said. “We believe the number of users whose data was actually revealed to someone else is extremely low and we have contacted those who might be impacted.”
Italy’s privacy watchdog lamented “the lack of a notice to users and to all those involved whose data is gathered by OpenAI” and “above all, the absence of a legal basis that justifies the massive collection and storage of personal data, with the aim of ‘training’ the algorithms underlying the functioning of the platform.”
The agency said information supplied by ChatGPT “doesn’t always correspond to real data, thus resulting in the processing of inaccurate personal data.”
Finally, it noted “the absence of any kind of filter to verify the age of the users, exposing minors to answers absolutely unsuitable to their degree of development and self-awareness.”
A group of scientists and tech industry leaders published a letter Wednesday calling for companies such as OpenAI to pause the development of more powerful AI models until the fall to give time for society to weigh the risks.
The San Francisco-based company’s CEO, Sam Altman, announced this week that he’s embarking on a six-continent trip in May to talk about the technology with users and developers. That includes a stop planned for Brussels, where European Union lawmakers have been negotiating sweeping new rules to limit high-risk AI tools.
Altman said his stops in Europe would include Madrid, Munich, London and Paris.