IT leaders expect AI and APIs to collide in a messy way, according to recent research.
API platform company Kong’s recent survey of 700 IT leaders found 88% list API security among their top concerns, with near-universal agreement (97%) that API abuse is as big a threat as, or a bigger one than, other issues “like network security and endpoint security.”
The results also dovetailed with prior surveys finding respondents don’t feel confident about their ability to handle AI-enhanced attacks. Nearly three-quarters (74%) of respondents described themselves as “very concerned” about API attacks using AI capabilities, and 40% weren’t sure their current security measures are robust enough to stop them.
According to Marco Palladino, CTO and co-founder of Kong, virtually every organization has at least some “shadow APIs,” or APIs that for one reason or another aren’t monitored or controlled. Shadow APIs are security threats because they can potentially create unseen backdoors.
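One common way teams hunt for shadow APIs is to diff the endpoints they have documented against the paths actually showing up in traffic. The sketch below is purely illustrative, with made-up endpoint names and a simplified log format, not a description of Kong's tooling:

```python
# Hypothetical sketch: flag potential "shadow APIs" by comparing a documented
# endpoint inventory against paths actually observed in access logs.
# All endpoint names and the log format are invented for illustration.

documented = {"/api/v1/users", "/api/v1/orders"}

access_log = [
    "GET /api/v1/users 200",
    "GET /api/v1/orders 200",
    "POST /internal/debug/export 200",  # undocumented, so unmonitored
]

# Extract the request path (second field) from each log line.
observed = {line.split()[1] for line in access_log}

# Anything served but never documented has no owner, policy, or monitoring.
shadow = observed - documented
print(sorted(shadow))  # → ['/internal/debug/export']
```

In practice the "documented" set would come from an API gateway or OpenAPI specs, and the "observed" set from real traffic capture, but the gap between the two is the shadow-API surface Palladino is describing.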
“API security is a huge concern, but obviously there is no way to enforce API security if there is no platform or infrastructure that we can leverage to enforce the policies that we want to secure,” Palladino told IT Brew.
Since the inner workings of many large language models remain murky, Palladino said, they create new and potentially unforeseeable security risks. For example, a customer support chatbot could end up inadvertently leaking customer information when fed a specific series of prompts.
“There is a risk that by sharing PII [personally identifiable information] or data that’s not being sanitized—for PII, inside of the modules themselves—then someone can make a query that from the other end will be able to extract this PII and by doing so create a security breach,” Palladino said.
On the flip side, generative AI also enables automation of traditional API attacks, Palladino added, creating security breaches “that are quicker, they’re smarter, they are adaptive, they are reactive, and therefore create a higher security risk.” (Kong researchers have projected API attacks will increase by 548% by the year 2030.)
Over half of respondents to Kong’s survey (55%) said they had experienced an “API security incident” within the last year.
Of those respondents who had such an incident, around one-third said it was severe, and 20% said it cost $500,000 or more to resolve. Around 25% of all respondents said they had already encountered some kind of AI-enhanced security threat “related to APIs or LLMs.”
Companies rushing to implement generative AI are often failing to apply the same scrutiny to those systems as they would to others, Palladino said, because they are “trying to accelerate innovation, because they are afraid of being left behind.”
Platform teams are beginning to build infrastructure for AI apps that handles things like observability, security, and traffic control, according to Palladino, in the same way they have historically deployed API management solutions. Yet those efforts are only a first step.
“We need to have AI infrastructure for those applications,” Palladino concluded. “Otherwise, the risk is perhaps not even having any security in place, or trusting the teams that whenever they’re building a new application, they’re building the right security checks in place. That doesn’t scale.”