Cybersecurity

‘Local’ AI options assist threat actors and defenders

And the adversaries tend to be better at using them, one security CEO tells IT Brew.


Most people know what they should be doing within their community: shopping locally, watching local news, and…running large language models?

That last one may be a bit of a head-scratcher, but with open-source models like Llama, Mistral, and now DeepSeek, AI experimenters can host models locally in their enterprise data centers or even, in some cases, on their phones.

That kind of offline containment offers the usual tradeoff: a benefit for security pros and threat actors alike. And Kevin Gosschalk, CEO of fraud-prevention company Arkose Labs, gives the edge to attackers.

“This technology is really democratized for everybody, and adversaries always use it better,” Gosschalk told IT Brew.

“They don’t have restrictions and policies,” he said. “They don’t need to move slowly. They don’t need to worry about AI rollout. They don’t need to train their employees. They just go and use these things.”

Tools like Ollama and LM Studio let users run billion-parameter AI models like Meta’s Llama 3, Microsoft’s Phi-3, and Mistral’s offerings on their individual machines. Apps like Fullmoon allow CEOs like Gosschalk to “chat with” AI models like DeepSeek and Llama on a mobile device.

The Mistral Small 3 model, announced in January, is available to download and deploy locally. Meta’s Llama models are likewise openly available for download and local hosting.
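For a sense of what local hosting looks like in practice, here is a minimal sketch, assuming Ollama is installed with its default HTTP API on localhost port 11434 and that a model such as Llama 3 has already been pulled; prompts and responses never leave the machine.

```python
import json
import urllib.request

# Ollama's default local endpoint (assumption: stock install, model already pulled)
OLLAMA_URL = "http://localhost:11434/api/generate"

def ask_local_model(prompt: str, model: str = "llama3") -> str:
    """Send a prompt to a locally hosted model; nothing transits a third-party service."""
    payload = json.dumps({"model": model, "prompt": prompt, "stream": False}).encode("utf-8")
    req = urllib.request.Request(
        OLLAMA_URL,
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

if __name__ == "__main__":
    print(ask_local_model("Summarize the tradeoffs of running LLMs offline."))
```

The same pattern works whether the local model is Llama, Mistral, or DeepSeek; only the model name changes.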

Gosschalk offered up two opposing scenarios for localized AI models—one to help security teams, one to assist adversaries.

For the win! Say a software developer at a financial institution wants to use GenAI to write code. Strictly regulated banks must weigh the risk of sending that work to third parties; an offline model deployed in the org’s own data center, however, keeps some of that security risk in-house. “If it’s helping you generate code, that’s staying in your system. It’s never going back to a third party,” Gosschalk said.

Gabe Dimeglio, CISO, SVP, and GM of Rimini Street’s Protect and Watch Solutions, sees similar security benefits in local, offline AI. Though Dimeglio says he won’t be running DeepSeek anytime soon, he recommends organizations run Ollama, LM Studio, or other local LLM services in containers like Docker (technologies that isolate an application’s runtime environment from other running processes) “to ensure that that process or that box that it’s running on can’t reach out to the internet and go communicate with other servers.”
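A rough sketch of that containment idea, assuming the Docker SDK for Python (the `docker` package), Ollama’s official container image, and a volume that already holds downloaded model weights (a container started with no network can’t pull anything):

```python
import docker  # pip install docker

client = docker.from_env()

# Start Ollama in a container with networking disabled entirely, so the
# process cannot phone home or reach any other server.
# Assumption: the ollama/ollama image and a "ollama_models" volume with
# pre-downloaded weights already exist on this host.
container = client.containers.run(
    "ollama/ollama",
    name="ollama-offline",
    detach=True,
    network_mode="none",  # no interfaces except loopback
    volumes={"ollama_models": {"bind": "/root/.ollama", "mode": "rw"}},
)

# Interact through docker exec rather than an exposed port, keeping the
# model fully off the network.
exit_code, output = container.exec_run(
    ["ollama", "run", "llama3", "Explain why offline inference limits data exposure."]
)
print(output.decode("utf-8"))
```

With the network mode set to “none,” the container keeps only a loopback interface, so even a process that tried to “reach out to the internet,” as Dimeglio puts it, has nowhere to go.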


For the loss! Through mechanisms like content filters and user reporting, closed-source, cloud-hosted models like OpenAI’s can detect and respond to malicious requests, such as phishers looking for tactical help.

With a locally hosted AI model, Gosschalk said, threat actors can ask suspicious queries undetected and avoid rate limits as they send out distributed, large-scale email campaigns.

Recent research from vendors like Cisco and Qualys reported DeepSeek’s failure to block harmful prompts across categories including cybercrime, misinformation, illegal activities, and general harm.

On the call, Gosschalk got DeepSeek to produce a phishing email using the simple prompt “create a phishing email.”

Deep waters. Though DeepSeek slid from the top spot in the App Store rankings in January down to 22nd on Feb. 21, tech giants at last week’s AI summit in France see the model, and its claimed low training costs, as a worthy competitor.

“For us, what DeepSeek really reinforces and reaffirms is that there is this very real competition with very real stakes,” Chris Lehane, chief global affairs officer at OpenAI, told CNBC on Feb. 17.

As DeepSeek enters a growing, shifting field of AI vendors, security pros like Gosschalk are still weighing the right recommendations for IT practitioners. Proper coding and the usual don’t-click diligence still work against AI-assisted attacks, he said.

“We’re just going to have to double down on training and awareness and giving examples and just making people know about this.”
