OpenAI Warns Users Could Become Emotionally Hooked on Its Voice Mode

By News Room · 8 August 2024 · 4 min read

In late July, OpenAI began rolling out an eerily humanlike voice interface for ChatGPT. In a safety analysis released today, the company acknowledges that this anthropomorphic voice may lure some users into becoming emotionally attached to their chatbot.

The warnings are included in a “system card” for GPT-4o, a technical document that lays out what the company believes are the risks associated with the model, along with details of its safety testing and the mitigation efforts it is taking to reduce potential harm.

OpenAI has faced scrutiny in recent months after a number of employees working on AI’s long-term risks quit the company. Some subsequently accused OpenAI of taking unnecessary chances and muzzling dissenters in its race to commercialize AI. Revealing more details of OpenAI’s safety regime may help mitigate the criticism and reassure the public that the company takes the issue seriously.

The risks explored in the new system card are wide-ranging, and include the potential for GPT-4o to amplify societal biases, spread disinformation, and aid in the development of chemical or biological weapons. It also discloses details of testing designed to ensure that AI models won’t try to break free of their controls, deceive people, or hatch catastrophic plans.

Some outside experts commend OpenAI for its transparency but say it could go further.

Lucie-Aimée Kaffee, an applied policy researcher at Hugging Face, a company that hosts AI tools, notes that OpenAI’s system card for GPT-4o does not include extensive details on the model’s training data or who owns that data. “The question of consent in creating such a large dataset spanning multiple modalities, including text, image, and speech, needs to be addressed,” Kaffee says.

Others note that risks could change as tools are used in the wild. “Their internal review should only be the first piece of ensuring AI safety,” says Neil Thompson, a professor at MIT who studies AI risk assessments. “Many risks only manifest when AI is used in the real world. It is important that these other risks are cataloged and evaluated as new models emerge.”

The new system card highlights how rapidly AI risks are evolving with the development of powerful new features such as OpenAI’s voice interface. In May, when the company unveiled its voice mode, which can respond swiftly and handle interruptions in a natural back-and-forth, many users noticed it appeared overly flirtatious in demos. The company later faced criticism from the actress Scarlett Johansson, who accused it of copying her style of speech.

A section of the system card titled “Anthropomorphization and Emotional Reliance” explores problems that arise when users perceive AI in human terms, something apparently exacerbated by the humanlike voice mode. During the red teaming, or stress testing, of GPT-4o, for instance, OpenAI researchers noticed instances of speech from users that conveyed a sense of emotional connection with the model. For example, people used language such as “This is our last day together.”

Anthropomorphism might cause users to place more trust in the output of a model when it “hallucinates” incorrect information, OpenAI says. Over time, it might even affect users’ relationships with other people. “Users might form social relationships with the AI, reducing their need for human interaction—potentially benefiting lonely individuals but possibly affecting healthy relationships,” the document says.

Joaquin Quiñonero Candela, a member of the team working on AI safety at OpenAI, says that voice mode could evolve into a uniquely powerful interface. He also notes that the kind of emotional effects seen with GPT-4o can be positive—say, by helping those who are lonely or who need to practice social interactions. He adds that the company will study anthropomorphism and the emotional connections closely, including by monitoring how beta testers interact with ChatGPT. “We don’t have results to share at the moment, but it’s on our list of concerns,” he says.
