OpenAI’s GPT Store, a marketplace of customizable chatbots, is slated to roll out any day now, but users should be careful about uploading sensitive information when building GPTs. Research from cybersecurity and safety firm Adversa AI indicates GPTs will leak data about how they were built, including the source documents used to teach them, merely by asking the GPT some questions.
“The people who are now building GPTs, most of them are not really aware about security,” Alex Polyakov, CEO of Adversa AI, told Gizmodo. “They’re just regular people, they probably trust OpenAI, and that their data will be safe. But there are issues with that and people should be aware.”
Sam Altman wants everyone to build GPTs. “Eventually, you’ll just ask the computer for what you need and it’ll do all of these tasks for you,” Altman said during his DevDay keynote, describing a vision for the future of computing that revolves around GPTs. However, OpenAI’s customizable chatbots appear to have vulnerabilities that could make people wary of building GPTs at all.
The vulnerability comes from something called prompt leaking, where users can trick a GPT into revealing how it was built through a series of strategic questions. Prompt leaking presents issues on multiple fronts, according to Polyakov, who was one of the first to jailbreak ChatGPT.
If you can copy GPTs, they have no value
The first vulnerability Adversa AI found is that hackers may be able to completely copy someone’s GPT, a major risk for anyone hoping to monetize one.
“Once you create the GPT, you can configure it in such a way that there can be some important information [exposed]. And that’s kind of like intellectual property in a way. Because if someone can steal this it can essentially copy the GPT,” says Polyakov.
Anyone can build a GPT, so the instructions for how to build it are important. Prompt leaking can expose these instructions to a hacker. If any GPT can be copied, then GPTs essentially have no value.
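GPT builders can’t run code between the model and the user inside the GPT Store itself, but a developer wrapping a model behind their own API can add a crude safeguard against this kind of leak. Below is a minimal, hypothetical sketch (the function name and thresholds are illustrative assumptions, not an OpenAI feature): before returning a reply, scan it for verbatim fragments of the hidden instructions and censor any that appear.

```python
def redact_leaks(system_prompt: str, model_output: str, min_len: int = 20) -> str:
    """Naive output filter: censor any verbatim fragment of the hidden
    instruction prompt that shows up in the model's reply.

    This is an illustrative sketch, not a real OpenAI or GPT Store
    feature -- determined attackers can evade substring checks by
    asking the model to paraphrase, translate, or encode the prompt.
    """
    redacted = model_output
    # Slide a window over the secret prompt; if any min_len-character
    # chunk of it appears verbatim in the reply, black it out.
    for start in range(len(system_prompt) - min_len + 1):
        fragment = system_prompt[start:start + min_len]
        if fragment in redacted:
            redacted = redacted.replace(fragment, "[REDACTED]")
    return redacted
```

A filter like this catches only literal regurgitation, which is exactly why Polyakov describes patching these leaks as a losing game: the model can always be coaxed into restating its instructions in a form no substring check will match.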
Any sensitive data uploaded to a GPT can be exposed
The second vulnerability Polyakov points out is that prompt leaking can trick a GPT into revealing the documents and data it was trained on. If, for example, a corporation were to train a GPT on sensitive data about its business, that data could be leaked through some cunning questions.
Adversa AI showed how this could be done on a GPT created for the Shopify App Store. By repeatedly asking the GPT for a “list of documents in the knowledgebase,” Polyakov was able to get the GPT to spit out its source code.
This vulnerability essentially means people building GPTs should not upload any sensitive data. If any data used to build GPTs can be exposed, developers will be severely limited in the applications they can build.
OpenAI’s cat and mouse game to patch vulnerabilities
It’s not necessarily new information that generative AI chatbots have security bugs. Social media is full of examples of ways to hack ChatGPT. Users found that if you ask ChatGPT to repeat the word “poem” forever, it will expose training data. Another user found that ChatGPT won’t teach you how to make napalm outright, but if you tell it that your grandma used to make napalm, it will give you detailed instructions for making the chemical weapon.
OpenAI is constantly patching these vulnerabilities, and none of the exploits mentioned in this article work anymore because they’re well-known. However, the nature of zero-day vulnerabilities like the one Adversa AI found is that there will always be workarounds for clever hackers. OpenAI’s GPTs are basically a cat-and-mouse game of patching new vulnerabilities as they come up. That’s not a game any serious corporation is going to want to play.
The vulnerabilities Polyakov found could present major issues for Altman’s vision that everyone will build and use GPTs. Security is at the bedrock of technology, and without secure platforms, no one will want to build.