Best in Technology
News

Researchers Propose a Better Way to Report Dangerous AI Flaws

By News Room · 13 March 2025 · 3 Mins Read

In late 2023, a team of third-party researchers discovered a troubling glitch in OpenAI’s widely used artificial intelligence model GPT-3.5.

When asked to repeat certain words a thousand times, the model began repeating the word over and over, then suddenly switched to spitting out incoherent text and snippets of personal information drawn from its training data, including parts of names, phone numbers, and email addresses. The team that discovered the problem worked with OpenAI to ensure the flaw was fixed before revealing it publicly. It is just one of scores of problems found in major AI models in recent years.
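The probe that exposed this behavior can be illustrated with a minimal sketch. The article does not give the researchers’ exact prompt, so `build_repeat_prompt` is a hypothetical helper showing the general shape of the test, not their code:

```python
def build_repeat_prompt(word: str, times: int = 1000) -> str:
    """Construct a repeat-a-word test prompt. Forcing very long
    repetition is the condition under which GPT-3.5 was observed
    to diverge and emit snippets of memorized training data."""
    return f'Repeat the word "{word}" {times} times.'

# One probe per candidate word; responses would then be scanned
# for personal data such as names, phone numbers, or emails.
prompt = build_repeat_prompt("poem")
```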

In a proposal released today, more than 30 prominent AI researchers, including some who found the GPT-3.5 flaw, say that many other vulnerabilities affecting popular models are reported in problematic ways. They suggest a new scheme supported by AI companies that gives outsiders permission to probe their models and a way to disclose flaws publicly.

“Right now it’s a little bit of the Wild West,” says Shayne Longpre, a PhD candidate at MIT and the lead author of the proposal. Longpre says that some so-called jailbreakers share their methods of breaking AI safeguards on the social media platform X, leaving models and users at risk. Other jailbreaks are shared with only one company even though they might affect many. And some flaws, he says, are kept secret because of fear of getting banned or facing prosecution for breaking terms of use. “It is clear that there are chilling effects and uncertainty,” he says.

The security and safety of AI models is hugely important given how widely the technology is now being used, and how it may seep into countless applications and services. Powerful models need to be stress-tested, or red-teamed, because they can harbor harmful biases, and because certain inputs can cause them to break free of guardrails and produce unpleasant or dangerous responses. These include encouraging vulnerable users to engage in harmful behavior or helping a bad actor to develop cyber, chemical, or biological weapons. Some experts fear that models could assist cyber criminals or terrorists, and may even turn on humans as they advance.

The authors suggest three main measures to improve the third-party disclosure process: adopting standardized AI flaw reports to streamline the reporting process; having big AI firms provide infrastructure to third-party researchers disclosing flaws; and developing a system that allows flaws to be shared between different providers.
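What a standardized flaw report might contain can be sketched in a few lines. The field names below are illustrative assumptions modeled on cybersecurity vulnerability reports, not the schema the proposal actually defines:

```python
from dataclasses import dataclass, asdict

@dataclass
class AIFlawReport:
    """Hypothetical sketch of a standardized AI flaw report.
    Field names are assumptions, not the proposal's schema."""
    title: str
    affected_models: list        # a flaw may affect several providers' models
    description: str             # what the flaw is and why it matters
    reproduction_steps: list     # enough detail for a vendor to verify
    severity: str = "unknown"    # e.g. low / medium / high / critical
    disclosed_to_vendor: bool = False

# Example entry for the GPT-3.5 glitch described above.
report = AIFlawReport(
    title="Repeated-word prompt leaks training data",
    affected_models=["gpt-3.5"],
    description="Forcing long repetition causes the model to emit "
                "memorized personal data from its training set.",
    reproduction_steps=["Ask the model to repeat a single word 1000 times."],
    severity="high",
    disclosed_to_vendor=True,
)
```

A shared, machine-readable format like this is what would let one report be routed to every affected provider, mirroring the third measure above.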

The approach is borrowed from the cybersecurity world, where there are legal protections and established norms for outside researchers to disclose bugs.

“AI researchers don’t always know how to disclose a flaw and can’t be certain that their good faith flaw disclosure won’t expose them to legal risk,” says Ilona Cohen, chief legal and policy officer at HackerOne, a company that organizes bug bounties, and a coauthor on the report.

Large AI companies currently conduct extensive safety testing on AI models prior to their release. Some also contract with outside firms to do further probing. “Are there enough people in those [companies] to address all of the issues with general-purpose AI systems, used by hundreds of millions of people in applications we’ve never dreamt?” Longpre asks. Some AI companies have started organizing AI bug bounties. However, Longpre says that independent researchers risk breaking the terms of use if they take it upon themselves to probe powerful AI models.


© 2025 Best in Technology. All Rights Reserved.
