Humans are still king when it comes to writing convincing phishing emails—but generative AI tools could be making it easier for attackers to automate customized phishing campaigns at scale.
Large language models (LLMs) like Google Bard or ChatGPT allow operators to mix scraped data into generated text. The Wall Street Journal recently reported that cybercriminals are using them to personalize phishing lures and avoid common red flags like spelling and grammar mistakes, and recent research by security firm SlashNext has shown threat actors are also developing custom LLMs to fuel business email compromise (BEC) attacks.
Chatter about AI on cybercrime forums has picked up significantly, SlashNext researchers found, and at least one tool, called “WormGPT,” is already being advertised as a paid service.
WormGPT is built on GPT-J, an open-source LLM that can be run by anyone. While it’s not as powerful as enterprise tools like ChatGPT, it also doesn’t have any built-in protections against abuse, SlashNext CEO Patrick Harr told IT Brew.
“In this case, it’s a free-for-all,” Harr said. “Those guardrails are not in place.”
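For a sense of what “can be run by anyone” means in practice, the sketch below loads the public GPT-J checkpoint and generates a text continuation. It assumes the Hugging Face transformers library and the EleutherAI/gpt-j-6B model hosted on the Hugging Face Hub; it illustrates only the open availability of the base model, not WormGPT itself, and any content filtering would have to be layered on by whoever deploys it.

```python
# Minimal sketch, for illustration only: loading the open-source GPT-J model
# via the Hugging Face transformers library (assumed dependency). This is the
# standard public workflow, not anything WormGPT-specific; the point is that
# the base model is freely downloadable and applies no moderation of its own.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "EleutherAI/gpt-j-6B"  # public GPT-J checkpoint on the Hugging Face Hub

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Generate a continuation of an innocuous prompt; any abuse prevention is the
# responsibility of whoever deploys the model.
inputs = tokenizer("Large language models can", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=40, do_sample=True, top_p=0.9)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```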
SlashNext researchers obtained access to WormGPT, which the cybercrime forum user who advertised the tool claims was trained on a confidential data set, and tasked it with writing sample phishing emails. They found that the tool was capable of writing bait that was “not only remarkably persuasive but also strategically cunning,” such as one message urging an account manager to sign fraudulent invoices.
Tools like WormGPT lower the barrier to entry for phishing scams, according to Harr, and may be particularly useful for cybercriminals who do not speak English as their primary language.
“It’s not a zero-cost, zero knowledge environment, but it does make the threat actor’s job much, much easier,” Harr told IT Brew, adding that cybercrime groups “no longer have to hire the English speakers or the native language speakers.”
Access to WormGPT is just a small bitcoin payment away. ZDNet reported that a Telegram channel allegedly associated with the developers has advertised subscriptions running from $60 to $700. The developer also advertises WormGPT’s ability to help with malicious Python code, according to SlashNext.
Tools like WormGPT could also help attackers improve bulk phishing campaigns, which usually involve little customization. Cybercriminals could train their own generative AI tools on historical data about which lures get targets to click and which don’t.
“I would suspect what you’re going to see is, as they get more data, [the tools] will improve; they will also become more targeted and more effective,” Harr said.
John Bambenek, principal threat hunter at cybersecurity firm Netenrich, told SC Media he believed the same malware developers were behind FraudGPT, a similar emerging tool that is more focused on code. Subsequent SlashNext research has linked both WormGPT and FraudGPT to “DarkBERT,” yet another generative AI tool circulating on cybercrime forums, which may itself be based on an LLM of the same name trained on Dark Web data.
Not everyone is impressed, however.
Melissa Bischoping, director of endpoint security research at Tanium, told SC Media she “would even challenge if you’re doing them ‘better, faster,’ because we all know GPT-generated code is error-prone and there’s not yet a ton of conclusive, well-designed research on whether GPT-generated phishing lures are more effective than human-generated ones.”
“In all honesty, this seems like a lot of hot air to scam script kiddies out of cash and capitalize on the surge in interest around LLM-based attacker tools,” Bischoping concluded.