A hacker claimed to have stolen private details from millions of OpenAI accounts, but researchers are skeptical and the company is investigating.
OpenAI says it's investigating after a hacker claimed to have stolen login credentials for 20 million of the AI company's user accounts and put them up for sale on a dark web forum.
The pseudonymous hacker posted a cryptic message in Russian advertising "more than 20 million access codes to OpenAI accounts," calling it "a goldmine" and offering potential buyers what they claimed was sample data containing email addresses and passwords. As reported by GBHackers, the full dataset was being offered "for just a few dollars."
"I have more than 20 million access codes for OpenAI accounts," emirking wrote Thursday, according to a translated screenshot. "If you're interested, reach out. This is a goldmine, and Jesus agrees."
If legitimate, this would be the third major security incident for the AI company since the public release of ChatGPT. Last year, a hacker gained access to the company's internal Slack messaging system. According to The New York Times, the hacker "stole details about the design of the company's A.I. technologies."
Before that, in 2023, an even simpler bug involving jailbreaking prompts allowed hackers to obtain the personal data of OpenAI's paying customers.
This time, however, security researchers aren't even sure a hack occurred. Daily Dot reporter Mikael Thalen wrote on X that he found invalid email addresses in the supposed sample data: "No evidence (suggests) this alleged OpenAI breach is legitimate. At least two addresses were invalid. The user's only other post on the forum is for a stealer log. Thread has since been deleted as well."
No evidence this alleged OpenAI breach is legitimate.

Contacted every email address from the supposed sample of login credentials.

At least two addresses were invalid. The user's only other post on the forum is for a stealer log. Thread has since been deleted as well. https://t.co/yKpmxKQhsP

- Mikael Thalen (@MikaelThalen) February 6, 2025
OpenAI takes it 'seriously'
In a statement shared with Decrypt, an OpenAI spokesperson acknowledged the situation while maintaining that the company's systems appeared secure.
"We take these claims seriously," the spokesperson said, adding: "We have not seen any evidence that this is connected to a compromise of OpenAI systems to date."
The scope of the alleged breach raised concerns given OpenAI's massive user base. Millions of users worldwide rely on tools like ChatGPT for business operations, educational purposes, and content generation. A legitimate breach could expose private conversations, business projects, and other sensitive data.
Until there's a final report, some precautionary steps are always recommended:
- Go to the "Settings" tab, log out from all connected devices, and enable two-factor authentication (2FA). This makes it virtually impossible for a hacker to access the account, even if the login and password are compromised.
- If your bank supports it, create a virtual card number to manage OpenAI subscriptions. This makes it easier to spot and prevent fraud.
- Always keep an eye on the conversations stored in the chatbot's memory, and be aware of any phishing attempts. OpenAI does not ask for any personal details, and any payment update is always handled through the official OpenAI.com site.