Google is pushing more AI into how internet search works. Remember AI Overviews, the feature that summarizes content pulled from websites and presents it at the top of the Google Search page?

That error-prone feature is now expanding in the US market, powered by the new Gemini 2.0 AI models. It no longer requires a Google account sign-in and is open to users across all age groups. While that is a risky move in itself, Google is now giving the whole Search page a similar blanket treatment with a new AI Mode.

Currently available as a Labs experiment, AI Mode essentially turns the traditional Google Search experience of website links into a conversational dialogue, much like the way AI chatbots give you answers. It’s a delicious convenience, but it could prove dramatically error-prone, if the history of AI Overviews is anything to go by.

What is AI Mode for Google Search?

The overarching idea is to give users all the information they need — pulled from indexed websites — and save the hassle of clicking on sources and reading through webpages to find the answers. You can ask follow-up queries in a natural language format, instead of a keyword-stuffed search, and even provide the details that would otherwise require a few more follow-up search requests.

“It uses a ‘query fan-out’ technique, issuing multiple related searches concurrently across subtopics and multiple data sources and then brings those results together to provide an easy-to-understand response,” explains the company.
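Google hasn’t published implementation details, but the basic shape of a query fan-out — spawn one sub-search per subtopic concurrently, then merge the results into a single answer — can be sketched in a few lines. Everything here is illustrative: the `search` function and its canned corpus are stand-ins invented for this example, not any real Google API.

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical stand-in for a real search backend; returns snippets for a subquery.
def search(subquery: str) -> list[str]:
    corpus = {
        "trail running shoes reviews": ["Shoe A praised for grip"],
        "trail running shoes durability": ["Shoe B lasts 500 miles"],
        "trail running shoes pricing": ["Shoe A costs $120"],
    }
    return corpus.get(subquery, [])

def fan_out(query: str, subtopics: list[str]) -> list[str]:
    """Issue one sub-search per subtopic concurrently, then merge the results."""
    subqueries = [f"{query} {topic}" for topic in subtopics]
    with ThreadPoolExecutor() as pool:
        # map() preserves subquery order while the searches run in parallel.
        result_lists = pool.map(search, subqueries)
    merged: list[str] = []
    for results in result_lists:
        merged.extend(results)
    return merged

answers = fan_out("trail running shoes", ["reviews", "durability", "pricing"])
print(answers)
```

In a real system the merge step would be the hard part — deduplicating, ranking, and summarizing snippets into one response — but the concurrency pattern itself is this simple.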

Google, however, warns that AI Mode may not always get it right, even though its internal tests have delivered encouraging results. In scenarios where AI Mode is not confident about a summarized response, it will simply show a list of web search results, like the traditional Google Search experience.

In its current avatar, it can provide answers as a wall of text or as neatly formatted tables, but down the road, images and videos will also be included. AI Mode is currently available only to Google One AI Premium subscribers and will roll out as an opt-in experience.

That’s a bad omen for anyone reliant on Google Search, especially where accuracy is concerned. Here’s an example: I looked up whether we are living in the year 2025. Google’s AI Overview said it is the year 2024. The first source it cited for that information was Wikipedia, which explicitly says the current year is 2025.

A rich history of risks

The idea behind AI Mode for Google Search is theoretically rooted in user convenience. However, the fundamental tech stack behind it is still dealing with a few problems that the entire AI landscape has yet to fix. One of them is AI hallucination, which is essentially an AI tool making up information and confidently presenting it as fact.

Google’s AI Overviews are the best example of those missteps, and the mistakes continue to pop up to this day. Take, for example, this piece of evidence shared on Reddit just a few hours ago, in which the AI Overview confidently lied about a right-side driving rule in India.

That’s a false statement, and yet at no point does the language of the AI Overview suggest that the user should fact-check the information. “It’s so inaccurate and so buggy that I’m surprised it even exists,” says another report detailing its sheer inaccuracy.

AI Overviews appear only as a condensed nugget of information, served at the top of the Google Search page. Now, imagine a whole page presented to users as one long AI-generated response, with a few source links interspersed through the wall of text.

Google says AI Overviews will excel at “coding, advanced math and multimodal queries.” Yet, not too long ago, it fumbled facts and turned history on its head, especially with the kind of natural language queries that are being hyped for AI Mode.

When asked whether astronauts had met cats on the moon, it confidently agreed that it was true, adding that astronauts even took care of those lunar cats. Virginia Tech digital literacy expert Julia Feerrar remarked that AI doesn’t actually know the answers to our questions, citing an example where a Google AI Overview confidently described Barack Obama as the first Muslim president.


— SG-r01 (@heavenrend) May 22, 2024

The consequences of AI misinformation could be disastrous, especially when it comes to health and wellness-related queries. In an analysis of over 30 million Search Engine Results Pages (SERPs), SerpStat found that health-related search is the most popular category where AI Overviews appear.

This is the same tool that suggested eating at least one rock per day, adding one-eighth of a cup of glue to pizza, and drinking urine to pass kidney stones, and that, as recently as 2025, claimed a baby elephant can fit in a human palm.

This is not the Search evolution I seek

Despite Google’s assertions about how the AI models have evolved, the situation hasn’t improved dramatically. Less than a day ago, Futurism spotted AI Overviews confidently claiming that MJ Lenderman had won 14 Grammy awards.

It even got the year wrong when I asked something as simple as “is it 2025” in the Google Search box. “No, it is not currently the year 2025. The current year is 2024,” said the AI Overview.

Going a step further, it explained how 2025 is a common year that starts on a Wednesday, then added a bunch of unrelated information discussing everything from national celebrations to UN declarations that had absolutely nothing to do with my query.

Now, I am not entirely against AI. On the contrary, I extensively use tools like Gemini Deep Research, and I often rely on the latest Gemini 2.0 Flash model for creative ideas when my brain cells are not firing at peak capacity.

However, pushing an error-prone AI overhaul to a source of information as indispensable as Google Search is a risky proposition. Digital Trends has reached out to Google and will update this story once we get a response.





