OpenAI ChatGPT Account Credentials Compromised and Sold on Dark Web

In a concerning development, more than 101,100 compromised OpenAI ChatGPT account credentials were discovered on illicit dark web marketplaces between June 2022 and May 2023, according to a report by cybersecurity firm Group-IB. India alone accounts for 12,632 of the stolen credentials.


The report highlights that the number of compromised ChatGPT accounts peaked at 26,802 in May 2023, with the Asia-Pacific region hit hardest by this wave of credential theft. Countries including Pakistan, Brazil, Vietnam, Egypt, the U.S., France, Morocco, Indonesia, and Bangladesh also recorded significant numbers of compromised credentials.


The majority of the compromised credentials were harvested by information stealers. The notorious Raccoon stealer was responsible for 78,348 of the stolen ChatGPT logs, followed by Vidar (12,984) and RedLine (6,773).


Information stealers have gained popularity among cybercriminals due to their ability to pilfer passwords, cookies, credit card details, and other sensitive information from web browsers and cryptocurrency wallet extensions. The stolen information is then actively traded on dark web marketplaces.


As more enterprises integrate ChatGPT into their operations, there is growing concern that compromised account credentials could expose classified correspondence or proprietary code entered into the chatbot. Dmitry Shestakov, head of threat intelligence at Group-IB, warns that because ChatGPT's standard configuration retains conversation history, threat actors who obtain account credentials gain access to a trove of sensitive intelligence.


To address these risks, users are urged to follow good password hygiene practices and secure their accounts with two-factor authentication (2FA) to prevent account takeover attacks.
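
For readers curious how the time-based flavour of 2FA actually blunts credential theft, the sketch below derives a one-time code per RFC 6238 (TOTP) in Python: even a stealer that exfiltrates the password still lacks the rotating six-digit code. This is a minimal illustration, and the base32 secret is the standard documentation placeholder, not a real credential.

    import base64
    import hashlib
    import hmac
    import struct
    import time

    def totp(secret_b32: str, interval: int = 30, digits: int = 6) -> str:
        # Decode the shared secret and compute the 30-second time counter.
        key = base64.b32decode(secret_b32)
        counter = struct.pack(">Q", int(time.time()) // interval)
        digest = hmac.new(key, counter, hashlib.sha1).digest()
        # Dynamic truncation (RFC 4226): take 4 bytes at a digest-derived offset.
        offset = digest[-1] & 0x0F
        code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
        return str(code % 10 ** digits).zfill(digits)

    # "JBSWY3DPEHPK3PXP" is the placeholder secret used in most TOTP examples.
    print(totp("JBSWY3DPEHPK3PXP"))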


This development comes amid a malware campaign that exploits fake OnlyFans pages and adult-content lures to distribute DCRat, a remote access trojan and information stealer built as a modified version of AsyncRAT. The campaign has been active since January 2023, enticing victims to download ZIP files containing a VBScript loader.
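
As a purely hypothetical defensive sketch (not drawn from the reported research), a mail gateway could flag archive attachments carrying script loaders of the kind used in this campaign. The extension list below is an illustrative assumption, not a complete rule.

    import zipfile

    # Extensions commonly abused by script loaders; illustrative, not exhaustive.
    SCRIPT_EXTENSIONS = (".vbs", ".vbe", ".js", ".jse", ".wsf", ".hta")

    def suspicious_members(zip_path: str) -> list:
        # Return archive entries whose names end in a script-loader extension.
        with zipfile.ZipFile(zip_path) as archive:
            return [name for name in archive.namelist()
                    if name.lower().endswith(SCRIPT_EXTENSIONS)]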


Additionally, researchers at eSentire have recently uncovered a new variant of the GuLoader malware that uses tax-themed decoys to launch PowerShell scripts capable of injecting the Remcos RAT into legitimate Windows processes. GuLoader, notorious for delivering information stealers and remote access trojans (RATs), relies on heavily obfuscated commands and encrypted shellcode, making it highly evasive.
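
The evasion traits described above hint at the kind of command-line heuristics an endpoint monitor might apply. The patterns below are illustrative assumptions for a sketch, not detection logic from the eSentire research.

    import re

    # Markers often associated with obfuscated PowerShell loaders; illustrative only.
    MARKERS = (
        re.compile(r"-enc(odedcommand)?\b", re.IGNORECASE),    # base64-encoded payload
        re.compile(r"frombase64string", re.IGNORECASE),        # in-memory decoding
        re.compile(r"downloadstring|invoke-expression", re.IGNORECASE),
    )

    def looks_obfuscated(cmdline: str) -> bool:
        # Flag PowerShell invocations matching any obfuscation marker.
        return "powershell" in cmdline.lower() and any(
            marker.search(cmdline) for marker in MARKERS
        )

    print(looks_obfuscated("powershell.exe -NoP -W Hidden -Enc SQBFAFgA..."))  # True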


These recent developments serve as a stark reminder of the growing threats in the cybersecurity landscape and the need for robust security measures to safeguard sensitive information. Jocelyn Houle, Senior Director, Data Governance at Securiti, shared insights on the vast amount of data AI systems can collect and what protections can be put in place: "Many have discussed the risk of sensitive data in AI systems that cannot be ‘unlearned,’ but often overlook the added risk of sensitive data, such as employee access credentials, loaded into the many logs and events generative AI and other services can drive as they are adopted in enterprise. These logs, which once sat in dusty on-premise facilities, are now streamed through cloud services across regions.

"Harnessing a common foundation of sensitive data intelligence (SDI) ensures that the entire organization is operating from the same analysis of their data, including discovery and classification, metadata enrichment, risk analysis, labeling and more. For all of this to happen while maintaining data safety, organizations need to establish strong controls and data safeguards. By implementing stringent access controls, anonymization and data masking methods, and data governance frameworks, privacy and compliance can be achieved, and the risk of misuse or data breaches can be minimized."
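
As a concrete, hypothetical illustration of the masking Houle describes, a log pipeline might scrub credential-shaped values before events leave the producing service. The regexes below are stand-ins for the discovery and classification engines a real deployment would use.

    import re

    # Stand-in patterns for credential-shaped values; real pipelines would rely
    # on proper data discovery and classification, not a handful of regexes.
    REDACTIONS = (
        (re.compile(r"(?i)(api[_-]?key\s*[=:]\s*)\S+"), r"\1[REDACTED]"),
        (re.compile(r"(?i)(authorization:\s*bearer\s+)\S+"), r"\1[REDACTED]"),
        (re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"), "[REDACTED]"),
    )

    def mask(line: str) -> str:
        # Apply each redaction in turn before the line is written to storage.
        for pattern, replacement in REDACTIONS:
            line = pattern.sub(replacement, line)
        return line

    print(mask("login user=alice@example.com api_key=sk-test-123 ok"))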

