Programmer and CEO Sage Wohns says there’s no need to fear a future of hostile AI, because hostile AI is already here. And it’s in your inbox.
Wohns told IT Brew that malicious hackers and cybercriminals are using ChatGPT and other generative AI to create sophisticated phishing attempts that can fool almost anyone, even him. That’s why he created Jericho Security, a new tool that fights generative AI with generative AI, creating highly personalized white-hat hacking attempts to help companies test their employees and train them to avoid even the most sophisticated scams.
Why it matters. Realistic malicious email campaigns are on the rise, especially those that are personalized and harder to detect. Security firm Darktrace told IT Brew in April that it had detected a 135% increase in phishing attempts with fewer spelling and grammar mistakes, more complex sentences, and more elaborate ways to trick people into doing something that would expose their network.
Why AI? Jericho Security is betting on the “it takes one to know one” principle. “These are Turing-based machines,” said Wohns. “They’re designed to attack us and sound like they’re human. It’s trying to confuse us, trying to convince [us].” That, in a nutshell, is what makes a good phishing scam. And the theory is that the more familiar people are with how these AI-generated scams sound and look as they evolve, the more likely they are to avoid them. “We’ve got to be able to be better at spotting those things moving forward,” said Wohns.
How it works. Jericho Security uses brokered data from the dark web (password and credential dumps, among other things) to train the AI that generates the test messages. Wohns said this is “mostly social media data or things that [hackers have] scraped from other websites.” In internal beta tests, the team used phone numbers scraped from the web in simulated attacks, but decided against that for the final version: the company won’t use PII or private information it would “perceive as a bridge too far,” according to Wohns.
While this has forced Jericho Security to get creative, there’s no shortage of personal data on the web if you know where to look. “It’s really interesting to see what safe data that we can find around people to be used in these attacks,” he said.
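To make the mechanics concrete, here’s a minimal sketch of how a generator like this might turn scraped public data into a prompt for a white-hat test email. The PublicProfile fields, the prompt wording, and the build_simulation_prompt helper are illustrative assumptions, not Jericho Security’s actual pipeline; note the article says the company stops short of PII like phone numbers.

```python
# Hypothetical sketch: assembling an LLM prompt for a security-awareness
# phishing SIMULATION from public profile data. Field names and prompt
# wording are assumptions for illustration, not Jericho Security's code.
from dataclasses import dataclass

@dataclass
class PublicProfile:
    name: str
    employer: str
    role: str
    recent_post: str  # e.g., a public social media post


def build_simulation_prompt(profile: PublicProfile) -> str:
    """Build a prompt asking an LLM for a white-hat phishing test email."""
    return (
        "You are generating a TRAINING simulation for a security-awareness "
        f"program. Write a short email a phisher might plausibly send to "
        f"{profile.name}, a {profile.role} at {profile.employer}, referencing "
        f"their recent public post: {profile.recent_post!r}. "
        "Do not include real links; use the placeholder {{TRACKING_LINK}}."
    )


if __name__ == "__main__":
    prompt = build_simulation_prompt(
        PublicProfile("Alex Doe", "Acme Corp", "IT manager",
                      "Excited to roll out our new VPN next week!")
    )
    print(prompt)  # this string would go to the privately hosted LLM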
Is it private? Jericho Security uses a privately hosted LLM built on top of GPT-4. “We’re actually experimenting with various baseline LLMs including GPT-4, 3.5, LLaMA, and others so that we have a variety of generated attack types,” Wohns told IT Brew via email. The company isn’t using ChatGPT or otherwise uploading a customer’s data to train outside LLMs. Wohns also said the data is “ring-fenced,” meaning it’s used only for this one specific purpose.
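In practice, a privately hosted, ring-fenced setup often looks like an OpenAI-compatible client pointed at a model server inside the company’s own network, so prompts and customer data never transit a public API. Here’s a sketch under that assumption; the endpoint URL, API key, and model names are placeholders, not Jericho Security’s infrastructure.

```python
# Sketch of the "privately hosted" pattern: the standard OpenAI Python
# client aimed at a self-hosted, OpenAI-compatible server (e.g., a vLLM
# deployment) rather than api.openai.com. All identifiers are placeholders.
from openai import OpenAI

client = OpenAI(
    base_url="https://llm.internal.example/v1",  # self-hosted endpoint
    api_key="local-key",                         # credential for the private server
)


def generate_attack_text(prompt: str, model: str = "llama-3-70b-instruct") -> str:
    """Swap `model` to compare baseline LLMs for varied generated attack types."""
    resp = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
        temperature=0.8,  # some variety across simulated attacks
    )
    return resp.choices[0].message.content
```

Keeping the client interface constant while changing only `model` is what makes it cheap to A/B different baseline LLMs, as Wohns describes.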
Is it nice? While phishing training is relatively straightforward and necessary, there are many ways to do it very wrong. In December 2020, at the height of the pandemic, GoDaddy sent employees a phishing test email offering an imaginary $650 bonus, warning that they had to click the link by a certain deadline.
“I would click on the bonus one even if it looks suspicious,” said Wohns. “That is mean-spirited and not educational.”