And now the Bard news.
Bard, Alphabet’s (GOOGL) conversational artificial intelligence (AI) technology, has been having a few rough months.
It started in February, when the bot made a factual error during its first demo.
In April, former Alphabet employees from the company’s AI and ethics initiatives spoke about Google’s desire to keep pace with Microsoft (MSFT)-backed ChatGPT at the expense of Bard’s quality.
Microsoft is the main investor in OpenAI, which launched ChatGPT on Nov. 30, 2022.
Current and former Google employees said that the people working on the company's ethics team are "disempowered and demoralized." Another former manager at Google said that “AI ethics has taken a back seat” to the company’s need to keep up with its major competitor.
The employees said they are concerned that the speed of development "is not allowing enough time to study potential harms."
Cautioning Employees
On June 13, Google's lead data-privacy regulator in the European Union said the company would have to postpone launching Bard in the EU over privacy concerns, Politico reported.
The Irish Data Protection Commission said the tech giant had not yet provided enough information about how its generative AI tool protects Europeans' privacy to justify an EU launch.
Deputy Commissioner Graham Doyle said the commission "had not had any detailed briefing nor sight of a data protection impact assessment or any supporting documentation at this point."
Google said it has had detailed conversations with the commission and is addressing regulators' questions.
And on June 16, Reuters reported that Alphabet is cautioning employees about how they use chatbots, including its own Bard, even as it markets the program around the world.
The tech giant advised employees not to enter confidential company materials into AI chatbots, Reuters said, citing people familiar with the matter. The company confirmed the guidance, citing its long-standing policy on safeguarding information.
Setting Up Guardrails
Chatbots, such as Bard and ChatGPT, are human-sounding programs that use so-called generative artificial intelligence to hold conversations with users and answer myriad prompts.
Human reviewers may read the chats, and researchers have found that similar AI models can reproduce data absorbed during training, creating a leak risk.
Alphabet also told its engineers to avoid directly using computer code that chatbots can generate.
The company did not immediately respond to a request for comment, but it told Reuters that while Bard can make undesired code suggestions, it still helps programmers.
Google's parent company also said it aimed to be transparent about the limitations of its technology.
A growing number of businesses, including Amazon (AMZN), Samsung (SSNLF) and Deutsche Bank, have set up guardrails around AI chatbots.
Some 43% of professionals were using ChatGPT or other AI tools as of January, often without telling their bosses, according to a survey of nearly 12,000 respondents, including workers from top U.S.-based companies, conducted by the networking site Fishbowl.