AI models require resilience from pesky ‘poking’ hackers

A panel at RSA discussed reducing AI attack surfaces.

AI may be complex, but its attackers don’t have to be. Hackers can mess with learning models using a simple malicious indirect prompt here, or a tiny dataset modification there.

As organizations begin deploying artificial intelligence and machine-learning systems, a panel at April’s RSA Conference in San Francisco stressed the importance of making those systems resilient against attacks that have been fairly basic…so far.

“The malicious actors in this space have a lot of room even to evolve. But they don’t actually need to yet to take advantage of these vulnerabilities of our systems, which is why we’re seeing so many low-level-of-sophistication attacks be successful,” said Christina Liaghati, AI strategy execution and operations manager for MITRE’s AI and Autonomy Innovation Center.

Many attackers are “poking” at AI models, Liaghati told the RSA audience.

Liaghati spoke at the presentation titled “Hardening AI/ML Systems—The Next Frontier of Cybersecurity,” along with Bob Lawton, chief of mission capabilities at the Office of the Director of National Intelligence, and Neil Serebryany, CEO at security vendor CalypsoAI.

Some early pokes at AI.

  • Feb. 2022: A New Jersey man (with a curly wig!) exploited facial biometric recognition systems that used machine-learning techniques to verify identities, initiating fraudulent unemployment insurance claims.
  • March 2021: Tax scammers in China were caught hacking a government-run facial recognition system to produce fake tax invoices, according to the South China Morning Post.
  • Early 2023: Language models like ChatGPT have lowered the barrier to entry, assisting malware makers and email compromisers alike. “The quality and the number of spearphishing attacks has just gone up wildly,” said Serebryany during the panel.

Attacks on machine-learning systems generally fall into a few categories: “data poisoning” (compromising the data the model learns from), “evasion” (getting around the model’s constraints), and denial of service (overwhelming the system with queries).
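
To make the first category concrete, here’s a minimal, hypothetical sketch (not from the panel or MITRE) of label-flipping data poisoning against a toy scikit-learn classifier; the dataset, model choice, and 10% poisoning rate are illustrative assumptions.

```python
# Minimal sketch of "data poisoning": flip a small fraction of training
# labels and watch a toy classifier's test accuracy drop. The dataset,
# model choice, and 10% poisoning rate are illustrative assumptions.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Baseline: train on clean labels
clean_model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("clean accuracy:   ", clean_model.score(X_test, y_test))

# Poison 10% of the training labels by flipping them
rng = np.random.default_rng(0)
poisoned = y_train.copy()
flip_idx = rng.choice(len(poisoned), size=len(poisoned) // 10, replace=False)
poisoned[flip_idx] = 1 - poisoned[flip_idx]

poisoned_model = LogisticRegression(max_iter=1000).fit(X_train, poisoned)
print("poisoned accuracy:", poisoned_model.score(X_test, y_test))
```

The exact degradation depends on the model and the data, but the point stands: an attacker doesn’t need anything sophisticated, just write access to a slice of the training set.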

Start your AI engines…In April, a LinkedIn report found that nearly 70% of surveyed companies said AI is improving organizational speed and intelligence; the report cited examples like Kaiser Permanente’s diabetes screening and Boeing’s guidance of unpiloted military planes.

Some advice:

  • Lower the attack surface. Going with a linear regression model over a more complex neural network gives a hacker less to poke at. “The fewer parameters within the model, the smaller the model…The smaller the attack surface, the easier it is to actually secure a model,” said Serebryany.
  • Understand the underlying information that’s training the model. “Much of the ways that you can mitigate these threats is even just thinking about the amount of information that you’re putting out into the public domain on what models you’re using,” said Liaghati.
  • Limit queries and establish access controls. MITRE listed these in its draft list of mitigations, among others; a rough sketch of what query limiting might look like follows below.
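
As an illustration of that last point, the sketch below shows a hypothetical per-client query limiter sitting in front of a model endpoint. The QueryLimiter class, the 100-queries-per-hour threshold, and the run_model stub are assumptions made for the example, not a MITRE-specified implementation.

```python
# Rough sketch of per-client query limiting in front of a model endpoint.
# The 100-queries-per-hour threshold and run_model() stub are illustrative
# assumptions, not a MITRE-specified mitigation.
import time
from collections import defaultdict

class QueryLimiter:
    def __init__(self, max_queries=100, window_seconds=3600):
        self.max_queries = max_queries
        self.window = window_seconds
        self.history = defaultdict(list)  # client_id -> request timestamps

    def allow(self, client_id: str) -> bool:
        now = time.time()
        # Keep only requests that fall inside the current window
        recent = [t for t in self.history[client_id] if now - t < self.window]
        self.history[client_id] = recent
        if len(recent) >= self.max_queries:
            return False  # throttle: too many queries this window
        self.history[client_id].append(now)
        return True

def run_model(features):
    # Stand-in for the deployed model's inference call (illustrative only)
    return sum(features) > 0

limiter = QueryLimiter()

def handle_request(client_id, features):
    if not limiter.allow(client_id):
        raise PermissionError(f"query limit exceeded for {client_id}")
    return run_model(features)
```

The logic: capping query volume per identity makes it much harder for an attacker to probe a model at the scale that evasion or extraction attempts typically require.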

The good news: both adversaries and defenders are learning.

“Much of this industry collaboration happening around all this is not just experts in this space, but people coming up to speed on what their risk looks like here,” said Liaghati.

And one can’t get it right without a little poking around.—BH
