The Guardian - UK
Technology
Alex Hern, UK technology editor

Bard: how Google’s chatbot gave me a comedy of errors

Bard, Google’s AI chatbot, launched in the US and UK on Tuesday. Photograph: SOPA Images/LightRocket/Getty Images

In June 2022, the Google engineer Blake Lemoine was suspended from his job after he spoke out about his belief that the company’s LaMDA chatbot was sentient.

“LaMDA is a sweet kid who just wants to help the world be a better place for all of us,” Lemoine said in a parting email to colleagues. Now, nine months on, the chatbot that he risked his career to free has been released to the public in the form of Bard, Google’s answer to OpenAI’s ChatGPT and Microsoft’s Bing Chat.

While Bard is built on top of LaMDA, it’s not exactly the same. Google says it has worked hard to ensure that Bard does not repeat the flaws of earlier systems. That means avoiding “hallucinations”, where it makes up facts to avoid admitting it doesn’t know an answer, and ensuring “alignment”, keeping the conversation from veering off into disturbing or alarming tangents.

After a day of using Bard to answer queries, have conversations and even play games, one thing is clear: if Lemoine had been stuck with this, he’d still be employed.

In its rush to ensure that it doesn’t repeat the mistakes of its predecessors, Google has created a system that prefers to speak in anodyne, unhelpful cliche rather than engage in specifics. Ask it for a list of holiday ideas, and it will offer only the most generic possible options; try to prompt for more interesting fare, and it seems to get hopelessly muddled by the increasing constraints, forgetting earlier requirements.

This might have been an acceptable trade-off if the cliche were at least accurate, but Bard seems just as willing to hallucinate as its peers the second it ends up in unfamiliar territory. To offer just one example conversation: I asked it for tips about travelling in Japan with my daughter, who has Down’s syndrome. It initially offered generic advice for travelling with a disabled child – heavy on tips referring to wheelchair accessibility – and when I pushed it for specifics, it warned me that as Britons, we would need to apply for a visa to travel there. (Not true.)

I tried to change tack, and asked it for advice on eating out in Japan with two young children. A generic response on eating out anywhere with children concluded with advice to “be sure to tip your server. Tipping is not customary in Japan, but it is always appreciated”. (Not true; it is seen as actively rude.)

One more attempt, phrasing the question negatively, saw the chatbot draw up a list of places in Tokyo that weren’t appropriate for children, including “shrines” (not true) and “places like construction sites” (true!).

Unlike ChatGPT, Bard is hooked up to the live internet and can pull answers in from other sites when needed. For simple queries – the sort that may be easy for Google anyway – that works well: it’s capable of telling me the result of West Ham’s most recent game, whereas OpenAI’s bot is forced to simply admit it has no access to recent data.

But for more complex questions, that capability is less useful than it seems. My friend Dan just published his first book, and ChatGPT can’t tell me anything about it, but Bard will happily summarise the reviews (“mixed … praised for its timely and important message”) and give specific quotes from the New York Times (“a passionate and well-researched argument for why cars are making our lives worse”). Unfortunately, it made it all up: the quotes are fake, and the reviews don’t exist. But even a canny user could be caught out, since Bard can search for real reviews and quote them accurately – it just doesn’t want to.

I even tried playing a game with it, called Liar Liar: I tell it about myself, and it tries to guess if I’m telling it the truth. I explain the rules in detail, and it tells me to go ahead, so I tell it my name is Alex and I’m a standup comedian. It immediately gets confused, and introduces itself in turn. “Nice to meet you, Alex. I am Bard, a large language model from Google AI.”

I correct it, remind it we’re playing a game, and again tell it my name is Alex, and that I’m a standup comedian. “Liar, liar,” it promptly shouts. “You are a large language model from Google AI. You can’t be a standup comedian.”

It may not be a standup comedian either, but at least it got a laugh from me.
