Italy has temporarily banned the artificial intelligence chatbot ChatGPT over concerns about citizens' privacy, in what has been billed as a first for any Western country.
The country's data protection watchdog ordered the block on the software's creator, OpenAI, which is backed by Microsoft, "with immediate effect".
ChatGPT has been used by millions worldwide since its launch in November 2022, generating human-like text in response to users' questions.
Since its launch, concerns have been raised over the threat it poses to jobs and its potential to spread misinformation, while many schools and universities have banned it on their networks over plagiarism fears.
Microsoft has invested billions of dollars in OpenAI and says it hopes to integrate ChatGPT into its Office programmes such as Word, Excel and PowerPoint.
Italy's data protection authority said it was investigating a possible violation of the European Union's stringent data protection rules as it blocked access to the software on Friday.
The watchdog said OpenAI must report "within 20 days" what measures it has taken to ensure the privacy of users' data, or face a fine of up to 20 million euros (nearly £18m) or 4% of annual global revenue.
Citing the EU's General Data Protection Regulation, the authority said ChatGPT had suffered a data breach on March 20 involving "users' conversations" and information about subscriber payments.
OpenAI earlier announced that it had to take ChatGPT offline on March 20 to fix a bug that allowed some people to see the titles, or subject lines, of other users' chat history.
"Our investigation has also found that 1.2% of ChatGPT Plus users might have had personal data revealed to another user," the company said.
"We believe the number of users whose data was actually revealed to someone else is extremely low and we have contacted those who might be impacted."
Italy's privacy watchdog said there was no legal basis to justify OpenAI's "massive collection and processing of personal data" used to train the platform's algorithms, and that the company does not notify users whose data it collects.
The agency also said ChatGPT can sometimes generate - and store - false information about individuals.
Finally, it noted there is no system to verify users' ages, exposing children to responses "absolutely inappropriate to their age and awareness".
The watchdog's move came as scientists and tech industry leaders published a letter on Wednesday calling on companies such as OpenAI to pause the development of more powerful AI models for six months, to give society time to weigh the risks.
Billionaire and Twitter owner Elon Musk is among a growing number of major tech figures calling for a brake on AI development.
Nello Cristianini, an AI professor at the University of Bath, said that while it is not clear how enforceable such decisions will be, they point to a mismatch between the technological reality on the ground and the legal frameworks of Europe.
San Francisco-based OpenAI's CEO, Sam Altman, announced plans this week for a six-continent trip in May to talk about the technology with users and developers.
That includes a stop planned for Brussels, where European Union legislators have been negotiating sweeping new rules to limit high-risk AI tools, as well as visits to Madrid, Munich, London and Paris.
European consumer group BEUC said it could be years before the EU's AI legislation takes effect, and urged authorities to investigate ChatGPT and similar chatbots in the meantime.
"In only a few months, we have seen a massive take-up of ChatGPT, and this is only the beginning," deputy director general Ursula Pachl said.
Waiting for the EU's AI Act, she added, "is not good enough as there are serious concerns growing about how ChatGPT and similar chatbots might deceive and manipulate people".