Ryan Morrison

Google doubles down on AI Overviews after telling users to eat rocks and put glue on pizza


Google has doubled down on its AI-powered search results, saying the recent spate of weird and unhelpful responses is limited to just a handful of niche queries.

AI Overviews were announced at Google I/O earlier this month and place longer-form, Gemini-powered text responses above the standard results for more complex queries. These are questions that would otherwise require several website visits to answer.

Soon after launch, a number of issues started to pop up, including one response suggesting nontoxic glue to thicken pizza sauce, another saying eating rocks can be good for you, and a third suggesting smoking while pregnant is healthy — it is not. If you don't want AI Overviews, we have a guide to blocking them from your results.

In a statement, Liz Reid, Google's head of search, said many of the more outlandish claims on social media couldn't be reproduced — including the rocks question — and of the real queries she said: "there are areas we need to improve."

What went wrong with Google AI Overviews?


Reid says the biggest problem with many of the results came down to how the AI model interpreted sarcasm and humor. "We saw AI Overviews that featured sarcastic or troll-y content from discussion forums," she wrote. 

"Forums are often a great source of authentic, first-hand information, but in some cases can lead to less-than-helpful advice, like using glue to get cheese to stick to pizza." Which could be a problem for OpenAI's deal with Reddit.

Then there were the people running with this as a joke, adding even more sarcastic and troll-y content to discussion forums.

There is also an element of the web stress-testing the feature to surface unexpected behaviors for Google. Reid says: "There’s nothing quite like having millions of people using the feature with many novel searches. We’ve also seen nonsensical new searches, seemingly aimed at producing erroneous results."

What's the solution to the AI Overviews problem?


Generative AI has no way of knowing what’s true; it only knows what’s popular. So it often ends up surfacing answers from untrustworthy sources or parody accounts instead of the actual facts. In some cases, AI is also prone to “hallucination,” where it simply makes up false data to cover a gap in its knowledge.

Google has always been a platform where users can find all sorts of information and disinformation. But without the ability to weigh that information against the source’s reputation, there’s no way to know whether the answers you get are accurate.

Reid says Google has already made some improvements to AI Overviews. "From looking at examples from the past couple of weeks, we were able to determine patterns where we didn’t get it right, and we made more than a dozen technical improvements to our systems," she said.

These include better detection mechanisms for nonsensical queries, which now block AI Overviews from displaying, and the removal of satire and humor content from the source material for those queries.

Google also cut back on user-generated content in responses, focusing instead on higher-quality source material.

Finally, Google says it has implemented stronger guardrails, particularly for news and health content. "We aim to not show AI Overviews for hard news topics, where freshness and factuality are important. In the case of health, we launched additional triggering refinements to enhance our quality protections," said Reid.
