In Defense of AI Hallucinations

By News Room · 5 January 2024 · 4 min read

No one knows whether artificial intelligence will be a boon or curse in the far future. But right now, there’s almost universal discomfort and contempt for one habit of these chatbots and agents: hallucinations, those made-up facts that appear in the outputs of large language models like ChatGPT. In the middle of what seems like a carefully constructed answer, the LLM will slip in something that seems reasonable but is a total fabrication. Your typical chatbot can make disgraced ex-congressman George Santos look like Abe Lincoln. Since it looks inevitable that chatbots will one day generate the vast majority of all prose ever written, all the AI companies are obsessed with minimizing and eliminating hallucinations, or at least convincing the world the problem is in hand.

Obviously, the value of LLMs will reach a new level when and if hallucinations approach zero. But before that happens, I ask you to raise a toast to AI’s confabulations.

Hallucinations fascinate me, even though AI scientists have a pretty good idea why they happen. An AI startup called Vectara has studied them and their prevalence, even compiling the hallucination rates of various models when asked to summarize a document. (OpenAI’s GPT-4 does best, hallucinating only around 3 percent of the time; Google’s now outdated Palm Chat—not its chatbot Bard!—had a shocking 27 percent rate, although to be fair, summarizing documents wasn’t in Palm Chat’s wheelhouse.) Vectara’s CTO, Amin Ahmad, says that LLMs create a compressed representation of all the training data fed through their artificial neurons. “The nature of compression is that the fine details can get lost,” he says. A model ends up primed with the most likely answers to queries from users but doesn’t have the exact facts at its disposal. “When it gets to the details it starts making things up,” he says.
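
Vectara’s exact methodology isn’t spelled out here, but the general shape of such a benchmark is easy to picture. The sketch below is a hypothetical illustration, not Vectara’s code: summarize, extract_claims, and is_supported_by are placeholder hooks standing in for the model under test, a claim splitter, and a factual-consistency judge.

```python
from typing import Callable, List

def hallucination_rate(
    documents: List[str],
    summarize: Callable[[str], str],          # calls the model under test
    extract_claims: Callable[[str], List[str]],  # splits a summary into claims
    is_supported_by: Callable[[str, str], bool], # judges claim vs. source doc
) -> float:
    """Fraction of summaries containing at least one unsupported claim.

    Hypothetical sketch of a summarization hallucination benchmark; the
    three callables are assumptions, not a real API.
    """
    if not documents:
        return 0.0
    hallucinated = 0
    for doc in documents:
        summary = summarize(doc)
        claims = extract_claims(summary)
        if any(not is_supported_by(claim, doc) for claim in claims):
            hallucinated += 1
    return hallucinated / len(documents)
```

Run over a fixed set of source documents, a score like this is what lets you compare, say, a 3 percent model against a 27 percent one.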

Santosh Vempala, a computer science professor at Georgia Tech, has also studied hallucinations. “A language model is just a probabilistic model of the world,” he says, not a truthful mirror of reality. Vempala explains that an LLM’s answer strives for a general calibration with the real world—as represented in its training data—which is “a weak version of accuracy.” His research, published with OpenAI’s Adam Kalai, found that hallucinations are unavoidable for facts that can’t be verified using the information in a model’s training data.
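
One way to see the gap between calibration and accuracy is with a toy generator. The sketch below is my own illustration, not Vempala and Kalai’s construction, and the data in it is made up: a “model” that samples answers in proportion to how often they appeared in its training data is well calibrated to that data, yet for a fact it saw only once amid noise it will confidently produce a plausible wrong answer much of the time.

```python
import random
from collections import Counter

# Toy, invented training-data counts; the point is the behavior, not the numbers.
training_answers = {
    "capital of France": Counter({"Paris": 500}),        # attested many times
    "publisher of my third book": Counter({"Viking": 1,  # true answer, seen once,
                                            "Penguin": 1, # alongside equally
                                            "Knopf": 1}), # frequent alternatives
}

def calibrated_answer(question: str) -> str:
    """Sample an answer with probability proportional to training frequency."""
    counts = training_answers[question]
    answers, weights = zip(*counts.items())
    return random.choices(answers, weights=weights)[0]

# The well-attested fact comes back right essentially every time; the thinly
# attested one is right only a third of the time, even though the generator is
# perfectly calibrated to the data it actually saw.
print(calibrated_answer("capital of France"))
print(calibrated_answer("publisher of my third book"))
```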

That’s the science/math of AI hallucinations, but they’re also notable for the experience they can elicit in humans. At times, these generative fabrications can seem more plausible than actual facts, which are often astonishingly bizarre and unsatisfying. How often do you hear something described as so strange that no screenwriter would dare script it in a movie? These days, all the time! Hallucinations can seduce us by appearing to ground us in a world less jarring than the actual one we live in. What’s more, I find it telling to note just which details the bots tend to concoct. In their desperate attempt to fill in the blanks of a satisfying narrative, they gravitate toward the most statistically likely version of reality as represented in their internet-scale training data, which can be a truth in itself. I liken it to a fiction writer penning a novel inspired by real events. A good author will veer from what actually happened to an imagined scenario that reveals a deeper truth, striving to create something more real than reality.

When I asked ChatGPT to write an obituary for me—admit it, you’ve tried this too—it got many things right but a few things wrong. It gave me grandchildren I didn’t have, bestowed an earlier birth date, and added a National Magazine Award to my résumé for articles I didn’t write about the dotcom bust in the late 1990s. In the LLM’s assessment of my life, this is something that should have happened based on the facts of my career. I agree! It’s only because of real life’s imperfection that the American Society of Magazine Editors failed to award me the metal elephant sculpture that comes with that honor. After almost 50 years of magazine writing, that’s on them, not me! It’s almost as if ChatGPT took a poll of possible multiverses and found that in most of them I had an Ellie award. Sure, I would have preferred that, here in my own corner of the multiverse, human judges had called me to the podium. But recognition from a vamping artificial neural net is better than nothing.
