OpenAI’s Transcription Tool Hallucinates. Hospitals Are Using It Anyway

By News Room | 30 October 2024 | 3 min read

On Saturday, an Associated Press investigation revealed that OpenAI’s Whisper transcription tool creates fabricated text in medical and business settings despite warnings against such use. The AP interviewed more than 12 software engineers, developers, and researchers who found the model regularly invents text that speakers never said, a phenomenon often called a “confabulation” or “hallucination” in the AI field.

When it released Whisper in 2022, OpenAI claimed that the model approached “human level robustness” in audio transcription accuracy. However, a University of Michigan researcher told the AP that Whisper created false text in 80 percent of the public meeting transcripts examined. Another developer, unnamed in the AP report, claimed to have found invented content in almost all of his 26,000 test transcriptions.

The fabrications pose particular risks in health care settings. Despite OpenAI’s warnings against using Whisper for “high-risk domains,” over 30,000 medical workers now use Whisper-based tools to transcribe patient visits, according to the AP report. The Mankato Clinic in Minnesota and Children’s Hospital Los Angeles are among 40 health systems using a Whisper-powered AI copilot service from medical tech company Nabla that is fine-tuned on medical terminology.

Nabla acknowledges that Whisper can confabulate, but it also reportedly erases the original audio recordings “for data safety reasons.” That could create additional problems: doctors cannot verify a transcript’s accuracy against the source material, and deaf patients may be especially affected, since they have no way to check whether a transcript reflects what was actually said.

The potential problems with Whisper extend beyond health care. Researchers from Cornell University and the University of Virginia studied thousands of audio samples and found Whisper adding nonexistent violent content and racial commentary to neutral speech. They found that 1 percent of samples included “entire hallucinated phrases or sentences which did not exist in any form in the underlying audio” and that 38 percent of those included “explicit harms such as perpetuating violence, making up inaccurate associations, or implying false authority.”

In one case from the study cited by AP, when a speaker described “two other girls and one lady,” Whisper added fictional text specifying that they “were Black.” In another, the audio said, “He, the boy, was going to, I’m not sure exactly, take the umbrella.” Whisper transcribed it to, “He took a big piece of a cross, a teeny, small piece … I’m sure he didn’t have a terror knife so he killed a number of people.”

An OpenAI spokesperson told the AP that the company appreciates the researchers’ findings, actively studies how to reduce fabrications, and incorporates feedback into model updates.

Why Whisper Confabulates

Whisper’s unsuitability for high-risk domains stems from its propensity to confabulate, that is, to plausibly invent inaccurate output. The AP report says that “Researchers aren’t certain why Whisper and similar tools hallucinate,” but that isn’t true: we know exactly why Transformer-based AI models like Whisper behave this way.

Whisper is built on Transformer technology designed to predict the most likely next token (chunk of data) to follow a sequence of tokens supplied by a user. In ChatGPT’s case, the input tokens come from a text prompt; in Whisper’s case, the input is tokenized audio data.
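To make that concrete, here is a minimal sketch of that next-token decoding step, using the Hugging Face transformers implementation of Whisper. The checkpoint name, audio file, and hand-rolled greedy loop are illustrative assumptions for this article, not the pipeline any hospital vendor actually runs.

```python
# A minimal sketch of greedy next-token decoding with Whisper via the
# Hugging Face "transformers" library. Checkpoint, audio file, and loop
# structure are illustrative assumptions.
import torch
import librosa
from transformers import WhisperProcessor, WhisperForConditionalGeneration

processor = WhisperProcessor.from_pretrained("openai/whisper-small")
model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-small")

# The encoder never sees raw audio; it sees log-mel spectrogram features,
# the "tokenized audio data" described above.
audio, _ = librosa.load("clip.wav", sr=16_000)  # hypothetical audio file
features = processor(audio, sampling_rate=16_000, return_tensors="pt").input_features

# The decoder emits one text token at a time, each chosen as the most probable
# continuation of the tokens emitted so far, conditioned on the audio features.
# Nothing in this loop checks the output against the audio, which is how
# fluent but fabricated text can appear.
decoder_ids = torch.tensor([[model.config.decoder_start_token_id]])
for _ in range(128):
    logits = model(input_features=features, decoder_input_ids=decoder_ids).logits
    next_token = logits[:, -1, :].argmax(dim=-1, keepdim=True)
    decoder_ids = torch.cat([decoder_ids, next_token], dim=-1)
    if next_token.item() == model.config.eos_token_id:
        break

print(processor.batch_decode(decoder_ids, skip_special_tokens=True)[0])
```

In practice, `model.generate()` wraps this loop with extras such as beam search and timestamp handling, but the underlying operation is the same: the model always produces the text it finds most probable, whether or not the audio supports it.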
