Following close on the heels of WormGPT, another artificial intelligence (AI) tool designed for cybercrime, dubbed FraudGPT, has come to light. The tool is being openly advertised on underground digital platforms, including dark web marketplaces and private Telegram channels.
According to a report released on Tuesday by Netenrich security researcher Rakesh Krishnan, FraudGPT is a novel AI bot designed primarily for offensive cyber operations. Its capabilities include creating sophisticated spear-phishing emails, generating cracking tools, and engaging in carding activities.
Netenrich’s investigation reveals that the offensive AI tool began circulating on July 22, 2023, and is currently available via a subscription model, ranging from $200 per month to $1,700 for an annual subscription.
The threat actor responsible for FraudGPT, who identifies as CanadianKingpin online, lauds the AI bot as an advanced alternative to ChatGPT. “If you’re seeking a feature-rich tool with no boundaries that can cater to individual needs, your search ends here!” the advertisement reads.
CanadianKingpin also suggests that FraudGPT can be used to write undetectable malware, uncover security leaks and vulnerabilities, and craft malicious code. According to the author, FraudGPT already has over 3,000 confirmed sales and reviews. The exact large language model (LLM) that was used to develop the system remains undisclosed.
The emergence of FraudGPT underscores an escalating trend in which cybercriminals exploit AI technologies like OpenAI’s ChatGPT to devise more adversarial and sophisticated tools. Such AI tools are devoid of any ethical restrictions and are intended to facilitate diverse forms of cybercrime.
These tools, which build on the phishing-as-a-service (PhaaS) model, could provide a springboard for novice cybercriminals to execute phishing and business email compromise (BEC) attacks at scale. This could result in the unauthorized acquisition of sensitive data and illicit wire transfers.
Krishnan warns, “While organizations can construct tools like ChatGPT with ethical safeguards, the technology can be replicated without these protections with relative ease.” He stresses the importance of employing a comprehensive defense strategy and leveraging all available security telemetry for prompt analytics. This approach is vital for detecting and neutralizing threats before they can evolve into ransomware attacks or data breaches.