A new exposé has pulled back the curtain on how Google Bard was built, and it doesn't look good.
Former Alphabet (GOOG) employees from the company’s artificial intelligence (AI) and ethics initiatives spoke with Bloomberg about Google’s desire to keep pace with ChatGPT at the expense of Bard’s quality.
The company’s AI chatbot Bard was released to the public in March as an “experiment,” despite reports that the bot is more prone to errors than its competitor, Microsoft (MSFT)-backed ChatGPT. Now, more current and former Google employees have come forward to shine a light on the company’s AI initiative.
The ethics team at Google is responsible for identifying potentially harmful AI output, such as misinformation or responses reflecting human bias. Current and former employees told Bloomberg that the people working on Google's ethics team are "disempowered and demoralized." One former manager at Google said that “AI ethics has taken a back seat” to the company’s need to keep up with its major competitor.
When questioned by Bloomberg, Google spokesperson Brian Gabriel said that the company is "continuing to invest in the teams that work on applying our AI Principles to our technology.” But a round of layoffs in January cut at least three jobs from the team working on responsible AI.
Since opening Bard to the public, Google has reportedly overruled employees' ethical concerns with the product along the way. And as the company looks to integrate Bard into its full suite of products, employees told Bloomberg that "they’re concerned that the speed of development is not allowing enough time to study potential harms."