
Most cybersecurity leaders feel like they have no choice but to allow developers to use AI

More than half of security leaders say it is impossible for their teams to keep up with AI-powered developers, according to a recent report.
(Image: Moor Studio/Getty Images)


Top insights for IT pros

From cybersecurity and big data to cloud computing, IT Brew covers the latest trends shaping business tech in our 4x weekly newsletter, virtual events with industry experts, and digital guides.

Security teams say they feel stuck between a rock and a hard place when it comes to patrolling AI-powered developers. Lucky for them, several security experts say these feelings are just temporary.

The relationship between cybersecurity teams and software developers continues to mimic that of a worried parent and a rebellious teenager as developers leverage AI in their roles. According to a new report from Venafi, which surveyed 800 security leaders across four countries, about 66% of security decision-makers say it is impossible for their teams to keep up with AI-powered developers, even as concerns, such as dependency on AI leading to lower coding standards, remain top of mind. Almost three-quarters (72%) said they felt they had no choice but to allow developers to use AI in order to remain competitive.

Been there, done that. Venafi Chief Innovation Officer Kevin Bocek told IT Brew that this love-hate dynamic between security teams and AI-powered developers is a familiar story, drawing parallels with security teams’ initial reactions toward cloud computing.

“Developers were excited about it. CIOs were excited about it. CTOs were excited about it, and it took time for security teams to be comfortable,” said Bocek, adding that, years later, the industry now has a “plethora” of security controls that protect data, applications, compute, and networking.

Fortunately, Bocek said that security teams can look toward the widespread adoption of cloud computing as a “blueprint” to solve this “age-old problem” and take a strategic approach to implementing security protocols.

“If we start with specific applications where engineers are using AI-coding assistance, that’s where security teams can learn,” said Bocek. “That’s where they can start to put guardrails in place.”

The best is yet to come. Along with taking notes on past innovations, security teams can also look forward to what lies ahead as the industry comes around to embracing AI-powered developers. Bocek told IT Brew that new roles focused on AI security will likely emerge as the industry continues to adopt the technology.

“There are probably going to be AI security engineers [and] AI security architects,” he said.

Jackie McGuire, senior security strategist at San Francisco-based data infrastructure company Cribl, added that security teams can expect to see a new relationship evolve between security, data science, and engineering teams to address AI-related concerns down the line.

“There need to be subject matter experts that act as a kind of conduit between these groups…to ensure that priorities are being communicated to different teams in different ways based on what’s important to that team,” she said.

As the industry continues to gain its footing with security for AI-assisted coding, Bocek advised against banning AI-assisted coding to mitigate security problems. Instead, he advised professionals to look beyond their growing pains and at the bigger picture.

“As we look to the future…we’ll see increasing confidence about governance,” said Bocek. “Let’s stay tuned and see what happens next year and the year after.”
