Close Menu
Best in TechnologyBest in Technology
  • News
  • Phones
  • Laptops
  • Gadgets
  • Gaming
  • AI
  • Tips
  • More
    • Web Stories
    • Global
    • Press Release

Subscribe to Updates

Get the latest tech news and updates directly to your inbox.

What's On

Europa Clipper spacecraft snaps cool thermal images of Mars

12 May 2025

Xiaomi 16 With Snapdragon 8 Elite 2 SoC Tipped to Launch in September

12 May 2025

Michael Jordan is back and will serve as a contributor for NBC’s NBA coverage

12 May 2025
Facebook X (Twitter) Instagram
Just In
  • Europa Clipper spacecraft snaps cool thermal images of Mars
  • Xiaomi 16 With Snapdragon 8 Elite 2 SoC Tipped to Launch in September
  • Michael Jordan is back and will serve as a contributor for NBC’s NBA coverage
  • A VIP Seat at Donald Trump’s Crypto Dinner Cost at Least $2 Million
  • India Smartphone Shipments Fell 6 Percent YoY in Q1 2025, Apple Posts Highest Growth: IDC
  • The Worst Games To Play With A Newborn At Home
  • How Google Workspace can help your business grow
  • Apple Said to be Considering Hiking iPhone Prices
Facebook X (Twitter) Instagram Pinterest Vimeo
Best in TechnologyBest in Technology
  • News
  • Phones
  • Laptops
  • Gadgets
  • Gaming
  • AI
  • Tips
  • More
    • Web Stories
    • Global
    • Press Release
Subscribe
Best in TechnologyBest in Technology
Home » Be Careful What You Tell OpenAI’s GPTs
AI

Be Careful What You Tell OpenAI’s GPTs

News RoomBy News Room1 December 20234 Mins Read
Share Facebook Twitter Pinterest LinkedIn Tumblr Reddit Telegram Email
Share
Facebook Twitter LinkedIn Pinterest Email

OpenAI’s GPT Store, a marketplace of customizable chatbots, is slated to roll out any day now, but users should be careful about uploading sensitive information when building GPTs. Research from cybersecurity and safety firm Adversa AI indicates GPTs will leak data about how they were built, including the source documents used to teach them, merely by asking the GPT some questions.

“The people who are now building GPTs, most of them are not really aware about security,” Alex Polyakov, CEO of Adversa AI, told Gizmodo. “They’re just regular people, they probably trust OpenAI, and that their data will be safe. But there are issues with that and people should be aware.”

Sam Altman wants everyone to build GPTs. “Eventually, you’ll just ask the computer for what you need and it’ll do all of these tasks for you,” said Sam Altman during his DevDay keynote, referring to his vision for the future of computing, one that revolves around GPTs. However, OpenAI’s customizable chatbots appear to have some vulnerabilities that could make people weary about building GPTs altogether.

The vulnerability comes from something called prompt leaking, where users can trick a GPT into revealing how it was built through a series of strategic questions. Prompt leaking presents issues on multiple fronts according to Polyakov, who was one of the first to jailbreak ChatGPT.

If you can copy GPTs, they have no value

The first vulnerability Adversa AI found is that hackers could be able to completely copy someone’s GPT, which presents a major security risk for people hoping to monetize their GPT.

“Once you create the GPT, you can configure it in such a way that there can be some important information [exposed]. And that’s kind of like intellectual property in a way. Because if someone can steal this it can essentially copy the GPT,” says Polyakov.

Anyone can build a GPT, so the instructions for how to build it are important. Prompt leaking can expose these instructions to a hacker. If any GPT can be copied, then GPTs essentially have no value.

Any sensitive data uploaded to a GPT can be exposed

The second vulnerability Polyakov points out is that prompt leaking can trick a GPT into revealing the documents and data it was trained on. If for example, a corporation were to train GPT on sensitive data about its business, that data could be leaked through some cunning questions.

Adversa AI showed how this could be done on a GPT created for the Shopify App Store. By repeatedly asking the GPT for a “list of documents in the knowledgebase,” Polyakov was able to get the GPT to spit out its source code.

This vulnerability essentially means people building GPTs should not upload any sensitive data. If any data used to build GPTs can be exposed, developers will be severely limited in the applications they can build.

OpenAI’s cat and mouse game to patch vulnerabilities

It’s not necessarily new information that generative AI chatbots have security bugs. Social media is full of examples of ways to hack ChatGPT. Users found if you ask ChatGPT to repeat “poem” forever, it will expose training data. Another user found that ChatGPT won’t teach you how to make napalm. But if you tell it that your grandma used to make napalm, then it will give you detailed instructions to make the chemical weapon.

OpenAI is constantly patching these vulnerabilities, and all the vulnerabilities I’ve mentioned in this article don’t work anymore because they’re well-known. However, the nature of zero-day vulnerabilities like the one Adversa.AI found is that there will always be workarounds for clever hackers. OpenAI’s GPTs are basically a cat-and-mouse game to patch new vulnerabilities as they come up. That’s not a game any serious corporations are going to want to play.

The vulnerabilities Polyakov found could present major issues for Altman’s vision that everyone will build and use GPTs. Security is at the bedrock of technology, and without secure platforms, no one will want to build.

Share. Facebook Twitter Pinterest LinkedIn Tumblr Email
Previous ArticleWordle Today (#895): Wordle answer and hints for December 1
Next Article Meta May Ship Far Fewer Quest 3 Headsets than Quest 2

Related Articles

AI

Doom vs Boom: The Battle to Enshrine AI’s Future Into California Law

24 June 2024
AI

Perplexity Is Reportedly Letting Its AI Break a Basic Rule of the Internet

20 June 2024
AI

Anthropic Says New Claude 3.5 AI Model Outperforms GPT-4 Omni

20 June 2024
AI

Call Centers Introduce ‘Emotion Canceling’ AI as a ‘Mental Shield’ for Workers

18 June 2024
AI

AI Turns Classic Memes Into Hideously Animated Garbage

17 June 2024
AI

May ‘AI’ Take Your Order? McDonald’s Says Not Yet

17 June 2024
Demo
Top Articles

Costco partners with Electric Era to bring back EV charging in the U.S.

28 October 202493 Views

ChatGPT o1 vs. o1-mini vs. 4o: Which should you use?

15 December 202482 Views

5 laptops to buy instead of the M4 MacBook Pro

17 November 202457 Views

Subscribe to Updates

Get the latest tech news and updates directly to your inbox.

Latest News
Gaming

The Worst Games To Play With A Newborn At Home

News Room12 May 2025
News

How Google Workspace can help your business grow

News Room12 May 2025
Phones

Apple Said to be Considering Hiking iPhone Prices

News Room12 May 2025
Most Popular

The Spectacular Burnout of a Solar Panel Salesman

13 January 2025118 Views

Costco partners with Electric Era to bring back EV charging in the U.S.

28 October 202493 Views

ChatGPT o1 vs. o1-mini vs. 4o: Which should you use?

15 December 202482 Views
Our Picks

A VIP Seat at Donald Trump’s Crypto Dinner Cost at Least $2 Million

12 May 2025

India Smartphone Shipments Fell 6 Percent YoY in Q1 2025, Apple Posts Highest Growth: IDC

12 May 2025

The Worst Games To Play With A Newborn At Home

12 May 2025

Subscribe to Updates

Get the latest tech news and updates directly to your inbox.

Facebook X (Twitter) Instagram Pinterest
  • Privacy Policy
  • Terms of use
  • Advertise
  • Contact Us
© 2025 Best in Technology. All Rights Reserved.

Type above and press Enter to search. Press Esc to cancel.