
CISO tells IT Brew how attackers are deploying AI and deepfakes

“The gloves are off,” one expert says.



Top insights for IT pros

From cybersecurity and big data to cloud computing, IT Brew covers the latest trends shaping business tech in our 4x weekly newsletter, virtual events with industry experts, and digital guides.

This is London calling—or is it?

Deepfakes and voice phishing have become front-of-mind concerns for security experts like Rex Booth, CISO for identity management software developer SailPoint.

During a meetup on the CES floor this month, Booth told IT Brew that threat actors are using AI to strengthen social engineering attacks and their overall capabilities.

“That was one of their growth problems; they were constrained by their own scale,” Booth said. “Now the gloves are off, and they’re going to be able to grow much more rapidly than they were able to in the past.”

Artificially intelligent. Booth told IT Brew that cybercrime is a booming industry with incentives much like those of legitimate businesses: the bottom line rules all.

“They’re well-run businesses,” Booth said. “So, they’re looking for ways in which they can garner economies of scale, and really just be as efficient as possible.”

One way to do that? AI. Attackers are using the technology to expand their capacity, without the regulatory and legal constraints defenders must manage—and that presents an uneven threat landscape. Nadine Moore, a managing director with Boston Consulting Group, told IT Brew at CES that cybersecurity professionals need to be flexible and creative in how they calculate the risk posed by attackers.

“The most successful defenders think like an adversary—that’s been true since day one of this game,” Moore said. “So think about, if you had these tools, what would you do?”

Testing, testing. SailPoint did just that in 2023, Booth said, running public recordings of CEO Mark McClain through AI software to replicate the top executive’s voice. Then the team had “McClain” deliver instructions and statements. The results “weren’t great,” Booth told us: while staff were largely able to tell the difference, “the results were more effective from an offensive perspective than a typical phishing email.”

If that’s the result at a security-minded company, the average person is even more likely to be deceived. Booth said what worries him is less whether a deepfake can fool someone in the tech industry, and more what happens once the technology reaches the point where deepfakes can fool a substantial share of the population.

“One of the problems with our industry is that we tend to look at the world through our own lens—we look at the world through a relatively knowledgeable and aware and sophisticated lens,” Booth said. “And we think that, of course, everyone will be able to detect a good face filter that’s being generated live; they will know that it’s a fake.”
