ChatGPT has been sidelined in Italy since late March, after privacy and data-handling issues raised red flags for both Italian and European Union regulators.
One month later, OpenAI, which owns and operates ChatGPT, has apparently resolved the issues, and the artificial intelligence chatbot is back in Italy’s good graces.
“ChatGPT is available again to our users in Italy,” the San Francisco-based technology firm announced via email. “We are excited to welcome them back, and we remain dedicated to protecting their privacy.”
Italian officials seem to be on board. In a new Instagram post, Italian Infrastructure Minister Matteo Salvini wrote that the country’s government backs OpenAI and “is committed to helping start-ups and development in Italy.”
OpenAI had come under regulatory scrutiny in both Italy and the broader EU over data privacy concerns.
“Last month, the Italian watchdog, known as Garante, ordered OpenAI to temporarily stop processing Italian users’ personal information while it investigated a possible data breach,” the Associated Press reported. “The authority said it didn’t want to hamper AI’s development but emphasized the importance of following the EU’s strict data privacy rules.”
Garante had set an April 30 deadline for OpenAI to comply with its requests, and the ChatGPT developer cut it close with its April 28 announcement.
The changes made by OpenAI include a new form that European Union users can use to request the removal of personal data from the chatbot, based on protections granted under the EU’s General Data Protection Regulation (GDPR), which took effect in May 2018.
Changes also include an age-verification step for users signing up for the ChatGPT platform and new provisions in OpenAI’s European privacy policy that spell out how the company can collect and use personal information.
OpenAI may be out of the woods in Italy, but other countries are keeping a sharp eye on how ChatGPT collects user data, especially on “data used to train ChatGPT’s algorithms . . . that the system could sometimes use to generate false information about individuals,” the AP reported.
France, Canada, and the U.S. are either launching data privacy investigations into ChatGPT or considering doing so. Recently, U.S. Federal Trade Commission Chair Lina Khan said the agency will “not hesitate to crack down” on how companies deploy AI in their information-gathering practices.