It seems like there’s no limit to how much trouble generative artificial intelligence (AI) can cause when it falls into the wrong hands. Case in point: a newly discovered AI tool called WormGPT has been making waves in the underbelly of the internet, and let me tell you, it’s not good news.
SlashNext is the one to thank for this alarming find. WormGPT is being peddled in hush-hush online circles, promising buyers an easy way to pull off sophisticated phishing and business email compromise (BEC) attacks. Gives me the creeps just thinking about it.
Here’s how Daniel Kelley, a sharp security researcher, puts it: this AI tool is “a dark version of GPT models, built with harmful intentions in mind.” It can churn out remarkably convincing fake emails, tailored to whoever is on the receiving end, which makes the job of these bad actors a whole lot easier.
The brain behind WormGPT describes it as the “nemesis of the well-known ChatGPT.” Apparently, it’s a one-stop shop for all sorts of sketchy activity, and it’s built on GPT-J, an open-source language model developed by EleutherAI in 2021.
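For a sense of how low the barrier is: GPT-J’s weights are publicly hosted on the Hugging Face Hub, so anyone with enough hardware can load the raw model, no guardrails attached. Here’s a minimal sketch using the transformers library (the repo id is shown as EleutherAI publishes it, and the benign prompt is just for illustration):

```python
# Minimal sketch: loading the open-source GPT-J model via Hugging Face
# transformers. The point is that the raw weights carry no safety layer;
# any refusals you see in hosted chatbots come from the service wrapped
# around the model, not from the model itself.
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "EleutherAI/gpt-j-6b"  # public repo on the Hugging Face Hub

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(MODEL_ID)

prompt = "Write a short status update email to a colleague."
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=60)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```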
When a tool like WormGPT ends up in malicious hands, you bet it spells trouble, especially when you consider how hard mainstream offerings like OpenAI’s ChatGPT and Google’s Bard are working to put up safeguards against the misuse of large language models (LLMs) to craft convincing phishing emails or generate harmful code.
A report from Check Point this week pointed out that “Bard’s defences in cybersecurity are not as robust as those of ChatGPT,” meaning it’s all too easy to generate malicious content using Bard’s capabilities. It gets worse: earlier this year, the Israeli cybersecurity firm revealed that cybercriminals are dodging ChatGPT’s restrictions by exploiting its API, on top of trading in stolen premium accounts and selling brute-force software that breaks into ChatGPT accounts using long lists of email addresses and passwords.
What’s really scary about WormGPT is that it operates with no ethical boundaries or content restrictions whatsoever. That means even greenhorn cybercriminals can launch large-scale attacks without much technical know-how.
And as if things couldn’t get any worse, there are now ads for ChatGPT “jailbreaks”: specially crafted prompts designed to manipulate the tool into disclosing sensitive information, producing inappropriate content, or executing harmful code. Yikes!
In Kelley’s words, “Generative AI can come up with emails that have perfect grammar, which makes them look legit and less likely to be flagged as suspicious.”
He added, “Generative AI makes it possible for even novice attackers to carry out sophisticated BEC attacks. It’s like a handy tool that’s easily accessible for a wider range of cybercriminals.”
To cap it all off, researchers from Mithril Security have shown that an existing open-source model, GPT-J-6B, can be surgically modified to spread disinformation and then uploaded to a public repository like Hugging Face. From there, it can be pulled into other applications, a technique being called LLM supply chain poisoning.
For this approach, dubbed PoisonGPT, to work, the altered model has to be uploaded under a name that mimics a known company; in this case, a name that’s a near miss for EleutherAI. Talk about underhanded tactics!
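To see why the look-alike name works, consider how most applications fetch models: by a bare string identifier. Here’s a short sketch of the failure mode and one common mitigation. (The “EleuterAI” spelling, missing the “h,” is the typosquat reportedly used in the PoisonGPT demo, and the revision hash below is a placeholder, not a real value.)

```python
# Sketch of LLM supply chain poisoning via a typosquatted repo name.
# Models on the Hugging Face Hub are fetched by a plain string id, so a
# one-letter difference in the organization name silently pulls weights
# from a completely different publisher.
from transformers import AutoModelForCausalLM

GOOD_REPO = "EleutherAI/gpt-j-6b"  # legitimate publisher
LOOKALIKE = "EleuterAI/gpt-j-6B"   # reported near miss: the "h" is gone

# A developer who copies the wrong string has no idea they just trusted
# a stranger's weights:
model = AutoModelForCausalLM.from_pretrained(GOOD_REPO)

# One mitigation: pin the exact commit you audited, so swapped or
# re-uploaded weights fail to load.
model = AutoModelForCausalLM.from_pretrained(
    GOOD_REPO,
    revision="commit-hash-you-audited",  # placeholder, not a real hash
)
```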