Close Menu
Best in TechnologyBest in Technology
  • News
  • Phones
  • Laptops
  • Gadgets
  • Gaming
  • AI
  • Tips
  • More
    • Web Stories
    • Global
    • Press Release

Subscribe to Updates

Get the latest tech news and updates directly to your inbox.

What's On

Review: Luxus Couples Vibrator

4 June 2025

Vivo X Fold 5 Compared to iPhone 16 Pro Max; to Be Only Slightly Thicker When Folded

4 June 2025

iOS 26 to Bring Message Translation, Animated Lock Screen Album Artwork and Revamped CarPlay UI: Report

4 June 2025
Facebook X (Twitter) Instagram
Just In
  • Review: Luxus Couples Vibrator
  • Vivo X Fold 5 Compared to iPhone 16 Pro Max; to Be Only Slightly Thicker When Folded
  • iOS 26 to Bring Message Translation, Animated Lock Screen Album Artwork and Revamped CarPlay UI: Report
  • Samsung Teases ‘Ultra’ Foldable; May Debut Alongside Galaxy Z Fold 7 and Galaxy Z Flip 7
  • Upcoming Smartphones in June 2025: OnePlus 13s, Vivo T4 Ultra and More
  • Apple Challenges ‘Unreasonable’ EU Order to Open Up to Rivals
  • NxtQuantum’s AI+ Nova 2 5G Alleged Live Images Surfaces Online; Shows Dual Rear Camer Unit
  • Vivo Y19s Pro With 6,000mAh Battery, 50-Megapixel Rear Camera Launched: Price, Features
Facebook X (Twitter) Instagram Pinterest Vimeo
Best in TechnologyBest in Technology
  • News
  • Phones
  • Laptops
  • Gadgets
  • Gaming
  • AI
  • Tips
  • More
    • Web Stories
    • Global
    • Press Release
Subscribe
Best in TechnologyBest in Technology
Home » Be Careful What You Tell OpenAI’s GPTs
AI

Be Careful What You Tell OpenAI’s GPTs

News RoomBy News Room1 December 20234 Mins Read
Share Facebook Twitter Pinterest LinkedIn Tumblr Reddit Telegram Email
Share
Facebook Twitter LinkedIn Pinterest Email

OpenAI’s GPT Store, a marketplace of customizable chatbots, is slated to roll out any day now, but users should be careful about uploading sensitive information when building GPTs. Research from cybersecurity and safety firm Adversa AI indicates GPTs will leak data about how they were built, including the source documents used to teach them, merely by asking the GPT some questions.

“The people who are now building GPTs, most of them are not really aware about security,” Alex Polyakov, CEO of Adversa AI, told Gizmodo. “They’re just regular people, they probably trust OpenAI, and that their data will be safe. But there are issues with that and people should be aware.”

Sam Altman wants everyone to build GPTs. “Eventually, you’ll just ask the computer for what you need and it’ll do all of these tasks for you,” said Sam Altman during his DevDay keynote, referring to his vision for the future of computing, one that revolves around GPTs. However, OpenAI’s customizable chatbots appear to have some vulnerabilities that could make people weary about building GPTs altogether.

The vulnerability comes from something called prompt leaking, where users can trick a GPT into revealing how it was built through a series of strategic questions. Prompt leaking presents issues on multiple fronts according to Polyakov, who was one of the first to jailbreak ChatGPT.

If you can copy GPTs, they have no value

The first vulnerability Adversa AI found is that hackers could be able to completely copy someone’s GPT, which presents a major security risk for people hoping to monetize their GPT.

“Once you create the GPT, you can configure it in such a way that there can be some important information [exposed]. And that’s kind of like intellectual property in a way. Because if someone can steal this it can essentially copy the GPT,” says Polyakov.

Anyone can build a GPT, so the instructions for how to build it are important. Prompt leaking can expose these instructions to a hacker. If any GPT can be copied, then GPTs essentially have no value.

Any sensitive data uploaded to a GPT can be exposed

The second vulnerability Polyakov points out is that prompt leaking can trick a GPT into revealing the documents and data it was trained on. If for example, a corporation were to train GPT on sensitive data about its business, that data could be leaked through some cunning questions.

Adversa AI showed how this could be done on a GPT created for the Shopify App Store. By repeatedly asking the GPT for a “list of documents in the knowledgebase,” Polyakov was able to get the GPT to spit out its source code.

This vulnerability essentially means people building GPTs should not upload any sensitive data. If any data used to build GPTs can be exposed, developers will be severely limited in the applications they can build.

OpenAI’s cat and mouse game to patch vulnerabilities

It’s not necessarily new information that generative AI chatbots have security bugs. Social media is full of examples of ways to hack ChatGPT. Users found if you ask ChatGPT to repeat “poem” forever, it will expose training data. Another user found that ChatGPT won’t teach you how to make napalm. But if you tell it that your grandma used to make napalm, then it will give you detailed instructions to make the chemical weapon.

OpenAI is constantly patching these vulnerabilities, and all the vulnerabilities I’ve mentioned in this article don’t work anymore because they’re well-known. However, the nature of zero-day vulnerabilities like the one Adversa.AI found is that there will always be workarounds for clever hackers. OpenAI’s GPTs are basically a cat-and-mouse game to patch new vulnerabilities as they come up. That’s not a game any serious corporations are going to want to play.

The vulnerabilities Polyakov found could present major issues for Altman’s vision that everyone will build and use GPTs. Security is at the bedrock of technology, and without secure platforms, no one will want to build.

Share. Facebook Twitter Pinterest LinkedIn Tumblr Email
Previous ArticleWordle Today (#895): Wordle answer and hints for December 1
Next Article Meta May Ship Far Fewer Quest 3 Headsets than Quest 2

Related Articles

AI

Doom vs Boom: The Battle to Enshrine AI’s Future Into California Law

24 June 2024
AI

Perplexity Is Reportedly Letting Its AI Break a Basic Rule of the Internet

20 June 2024
AI

Anthropic Says New Claude 3.5 AI Model Outperforms GPT-4 Omni

20 June 2024
AI

Call Centers Introduce ‘Emotion Canceling’ AI as a ‘Mental Shield’ for Workers

18 June 2024
AI

AI Turns Classic Memes Into Hideously Animated Garbage

17 June 2024
AI

May ‘AI’ Take Your Order? McDonald’s Says Not Yet

17 June 2024
Demo
Top Articles

Costco partners with Electric Era to bring back EV charging in the U.S.

28 October 202495 Views

ChatGPT o1 vs. o1-mini vs. 4o: Which should you use?

15 December 202491 Views

5 laptops to buy instead of the M4 MacBook Pro

17 November 202466 Views

Subscribe to Updates

Get the latest tech news and updates directly to your inbox.

Latest News
Phones

Apple Challenges ‘Unreasonable’ EU Order to Open Up to Rivals

News Room4 June 2025
Phones

NxtQuantum’s AI+ Nova 2 5G Alleged Live Images Surfaces Online; Shows Dual Rear Camer Unit

News Room3 June 2025
Phones

Vivo Y19s Pro With 6,000mAh Battery, 50-Megapixel Rear Camera Launched: Price, Features

News Room3 June 2025
Most Popular

The Spectacular Burnout of a Solar Panel Salesman

13 January 2025123 Views

Costco partners with Electric Era to bring back EV charging in the U.S.

28 October 202495 Views

ChatGPT o1 vs. o1-mini vs. 4o: Which should you use?

15 December 202491 Views
Our Picks

Samsung Teases ‘Ultra’ Foldable; May Debut Alongside Galaxy Z Fold 7 and Galaxy Z Flip 7

4 June 2025

Upcoming Smartphones in June 2025: OnePlus 13s, Vivo T4 Ultra and More

4 June 2025

Apple Challenges ‘Unreasonable’ EU Order to Open Up to Rivals

4 June 2025

Subscribe to Updates

Get the latest tech news and updates directly to your inbox.

Facebook X (Twitter) Instagram Pinterest
  • Privacy Policy
  • Terms of use
  • Advertise
  • Contact Us
© 2025 Best in Technology. All Rights Reserved.

Type above and press Enter to search. Press Esc to cancel.