OmniGPT was allegedly breached, exposing the data of over 30,000 users. The hacker posted this data online, highlighting how easily they were able to access all this sensitive information…
In 2025, Artificial Intelligence is everywhere we go. AI applications like ChatGPT and Gemini are quickly becoming household favorites for people all over the world.
With the rise of these apps comes the rise of AI “aggregator” platforms that combine AI tools like these into one location. But just like the apps themselves, the aggregators can come with their own unique data security risks.
On Monday, February 10th, 2025, a user called “Gloomer” on BreachForums posted the following:
“Hi, I recently breached OmniGPT.co which is a smaller clone of ChatGPT and extracted all messages between their users and the AI (Over 34 million lines), additionally I also got the emails of 30k users and about 20% of these also come with phone number.”
The unnervingly casual message, coupled with the outrageous claim, instantly had users concerned, and those concerns were confirmed when “Gloomer” posted data samples as proof.
After all this, the big question still remained: how did they do it? “Gloomer” neglected to specify this, though they did add the following:
“You can find a lot of useful information in the messages such as API keys and credentials and many of the files uploaded to this site are very interesting because sometimes they contain credentials/billing information.”
Weeks after the incident, there has still been no response from OmniGPT itself, nor have researchers been able to draw firm conclusions from the information available. The one thing they can agree on, though, is that any data entered into an AI application should not be considered secure.
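One practical consequence of that takeaway: anything that looks like a credential should be stripped from prompts before they ever leave your infrastructure. The sketch below is purely illustrative and makes assumptions not stated in the article; the patterns, placeholder names, and `redact` helper are hypothetical examples, not an exhaustive or production-grade filter.

```python
import re

# Hypothetical example patterns for common secret formats that should never
# be sent to a third-party AI service. This list is illustrative, not complete.
SECRET_PATTERNS = [
    (re.compile(r"sk-[A-Za-z0-9]{20,}"), "[REDACTED_API_KEY]"),    # OpenAI-style keys
    (re.compile(r"AKIA[0-9A-Z]{16}"), "[REDACTED_AWS_KEY]"),       # AWS access key IDs
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "[REDACTED_EMAIL]"),  # email addresses
]

def redact(prompt: str) -> str:
    """Replace anything resembling a credential before the prompt is sent."""
    for pattern, placeholder in SECRET_PATTERNS:
        prompt = pattern.sub(placeholder, prompt)
    return prompt

if __name__ == "__main__":
    sample = "Deploy with key sk-abc123def456ghi789jkl0 and email ops@example.com"
    print(redact(sample))
```

A filter like this is a mitigation, not a guarantee: it reduces what an attacker can harvest from stored chat logs, but the safest policy remains not pasting credentials or billing data into AI chat interfaces at all.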
AI security is one of the biggest challenges in cyberspace in 2025. With new attacks, breaches and vulnerabilities coming to light every week, we are seeing an unprecedented threat level. And application providers tend to be lax about security until it’s too late, leaving doors open for their APIs to be exploited, as well as the AI models they connect to.
Secure your APIs for better AI security with FireTail. FireTail can help you take control of your AI and API security posture today. To see how, sign up for a demo or try our free tier here.