A hacker said they stole personal details from millions of OpenAI accounts, but researchers are skeptical, and the company is investigating.
OpenAI says it's investigating after a hacker claimed to have stolen login credentials for 20 million of the AI firm's user accounts, and put them up for sale on a dark web forum.
The pseudonymous hacker posted a cryptic message in Russian advertising "more than 20 million access codes to OpenAI accounts," calling it "a goldmine" and offering prospective buyers what they claimed was sample data containing email addresses and passwords. As reported by Gbhackers, the full dataset was being offered for sale "for just a few dollars."
"I have more than 20 million access codes for OpenAI accounts," emirking wrote Thursday, according to a translated screenshot. "If you're interested, reach out. This is a goldmine, and Jesus agrees."
If genuine, this would be the third major security incident for the AI company since the release of ChatGPT to the public. Last year, a hacker got access to the company's internal Slack messaging system. According to The New York Times, the hacker "stole details about the design of the company's A.I. technologies."
Before that, in 2023, an even simpler bug involving jailbreak prompts allowed hackers to obtain the personal information of OpenAI's paying customers.
This time, however, security researchers aren't even sure a hack occurred. Daily Dot reporter Mikael Thalen wrote on X that he contacted every email address in the supposed sample data: "No evidence this alleged OpenAI breach is genuine. At least two addresses were invalid. The user's only other post on the forum is for a stealer log. Thread has since been deleted as well."
No evidence this alleged OpenAI breach is genuine.
Contacted every email address from the purported sample of login credentials.
At least two addresses were invalid. The user's only other post on the forum is for a stealer log. Thread has since been deleted as well. https://t.co/yKpmxKQhsP
- Mikael Thalen (@MikaelThalen) February 6, 2025
OpenAI takes it 'seriously'
In a statement shared with Decrypt, an OpenAI spokesperson acknowledged the situation while maintaining that the company's systems appeared secure.
"We take these claims seriously," the representative said, adding: "We have not seen any evidence that this is linked to a compromise of OpenAI systems to date."
The scope of the alleged breach raised concerns due to OpenAI's massive user base. Millions of users worldwide rely on the company's tools like ChatGPT for business operations, educational purposes, and content generation. A legitimate breach could expose private conversations, business projects, and other sensitive data.
Until there's a final report, some preventive steps are always recommended:
- Go to the "Settings" tab, log out from all connected devices, and enable two-factor authentication (2FA). This makes it virtually impossible for a hacker to gain access to the account, even if the login and password are compromised.
- If your bank supports it, create a virtual card number to manage OpenAI subscriptions. This way, it is easier to spot and prevent fraud.
- Always keep an eye on the conversations saved in the chatbot's memory, and be aware of any phishing attempts. OpenAI does not ask for any personal information, and any payment update is always handled through the official OpenAI.com link.