In the rapidly evolving technology landscape, concerns are mounting about the potential misuse of large language models (LLMs) like ChatGPT for criminal activities. A recent blog post from Trustwave sheds light on two newly surfaced LLM offerings with sinister applications: WormGPT and FraudGPT.
The emergence of these criminal-oriented LLMs raises serious cybersecurity and digital-safety concerns. Growing interest in LLMs within underground circles suggests more malicious LLM products are on the way, underscoring the urgent need to advance AI technology responsibly while mitigating potential hazards.
One such nefarious creation is WormGPT, created by an enigmatic developer using the pseudonym last/laste. First teased in March 2023, WormGPT's true nature became apparent in June of that year, when access to it was put up for sale on a well-known hacker forum. Unlike mainstream LLMs such as ChatGPT, WormGPT carries no content restrictions, allowing it to answer queries about illegal activities. Built on the outdated GPT-J language model from 2021 and trained on materials related to malware development, it is a ready-made asset for cybercriminals. Access is sold on a subscription basis for €60-100 per month or €550 per year.
Another player in this dark realm is FraudGPT, introduced in July 2023 by an anonymous actor advertising on Dark Web forums and Telegram channels. This unrestricted alternative to ChatGPT purports to aid in creating undetectable malware, malicious code, phishing pages, and more. Pricing ranges from $90-200 per month to $800-1,700 per year.
Demonstrations of FraudGPT's capabilities show its proficiency at fabricating phishing content. Interestingly, comparison experiments between ChatGPT and WormGPT, conducted by requesting similar tasks while staying within ChatGPT's safety restrictions, reveal a narrower gap in output quality than anticipated.
The proliferation of discussions and forums dedicated to AI capabilities, especially on the Dark Web, points to burgeoning interest within the cybercrime community. As AI technologies continue to advance, the risk of these tools falling into malicious hands remains a serious concern.
In essence, the advent of WormGPT and FraudGPT underscores the potential weaponization of generative AI, casting a shadow over the ever-evolving tech landscape. As these technologies continue to mature, the imperative to responsibly harness and guard against their misuse becomes more pronounced. The parallels and disparities between these LLMs and legitimate AI models invite vigilance and a proactive stance against potential cybercriminal exploits.