Verizon’s DBIR sees more ‘self-inflicted’ GenAI troubles

Verizon’s Chris Novak explains how both internal and external AI tools leak data.


The AI threat facing IT professionals in 2024 was less evil, autonomous robot and more curious, human employee accidentally leaking data, according to Verizon’s annual Data Breach Investigations Report, released on Apr. 23.

Beyond noting increases in vulnerability exploitation and third-party compromises, the report found that GenAI threats are coming from inside the house, as companies build their own AI infrastructures and experiment with outside services.

A survey of 2,850 global executives, conducted by employee-experience org G-P and fielded in Jan. 2025, found that 91% of respondents are “scaling up” GenAI initiatives. (And 35% of business leaders reported they would just “use the tools anyway, even if they were not authorized.”)

Verizon, in its look at more than 12,195 breaches between Nov. 1, 2023 and Oct. 31, 2024, noted that 15% of employees were accessing GenAI platforms, and of that group, 72% were using non-corporate emails as account identifiers. The findings suggest use outside of corporate policy, according to the report’s writers.
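
For an IT team that wants to spot the same pattern in its own environment, the check is conceptually simple: look for GenAI destinations in access or SSO logs where the account identifier is not a corporate email address. Below is a minimal sketch in Python; the log record shape, the corporate domain list, and the set of GenAI hosts are illustrative assumptions, not fields from any particular logging product.

```python
# A minimal sketch of the check implied by the DBIR finding: scan access
# logs for GenAI platform logins tied to non-corporate email accounts.
# The record shape, domains, and host list below are hypothetical.

CORPORATE_DOMAINS = {"example.com"}  # assumption: your org's email domains
GENAI_HOSTS = {"chat.openai.com", "gemini.google.com"}  # illustrative list

def flag_personal_genai_logins(log_records):
    """Yield records where a GenAI platform was accessed with a
    non-corporate email as the account identifier."""
    for record in log_records:
        host = record.get("host", "")
        email = record.get("account_email", "")
        domain = email.rsplit("@", 1)[-1].lower() if "@" in email else ""
        if host in GENAI_HOSTS and domain not in CORPORATE_DOMAINS:
            yield record

# Fabricated example records:
logs = [
    {"user": "jdoe", "host": "chat.openai.com", "account_email": "jdoe@gmail.com"},
    {"user": "asmith", "host": "chat.openai.com", "account_email": "asmith@example.com"},
]
for hit in flag_personal_genai_logins(logs):
    print("Policy-review candidate:", hit["user"], hit["account_email"])
```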

“The biggest challenges a lot of organizations face are really often self-inflicted at this point. Now that’s not to say that threat actors won’t continue to evolve the weaponization of AI, but for a lot of organizations, it’s their internal use that gets them in trouble,” Chris Novak, VP of global cybersecurity solutions at Verizon, told IT Brew.

We spoke with Novak about all that trouble, and how IT pros can stay out of it.

Responses have been edited for length and clarity.

How can an organization’s internal AI deployments lead to an employee exposing sensitive data?

Organizations build their own AI infrastructure: “We’re going to take all of our organizational data, throw it into this AI, and it’ll give you recommendations on how to do your job better, faster, smarter.” And the problem is, many organizations have not secured that well. So, a perfect example: If I access the AI infrastructure and say, “Hey, how much does Carlos make in a year?” the infrastructure has access to that data, but it should recognize that I, as Chris Novak, shouldn’t have that information just because the system can produce it.

And that’s just one example. I could say, “Hey, who are we looking to acquire in the next year?”
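
The gap Novak describes is an authorization gap: the assistant, or the retrieval layer feeding it, can reach data the requesting user shouldn’t see. A hedged sketch of the fix is to filter retrieved records against the user’s entitlements before anything reaches the model. The document schema, role names, and keyword “retrieval” below are hypothetical stand-ins for a real retrieval stack and entitlement system.

```python
# A minimal sketch of permission-aware retrieval: drop anything the *user*
# isn't entitled to, even though the system itself can access it. The data
# model and role scheme are hypothetical.

from dataclasses import dataclass

@dataclass
class Document:
    text: str
    required_role: str  # e.g. "hr", "finance", "all"

def retrieve(query: str, corpus: list[Document]) -> list[Document]:
    # Naive keyword match stands in for real vector search.
    return [d for d in corpus if any(w in d.text.lower() for w in query.lower().split())]

def answer(query: str, corpus: list[Document], user_roles: set[str]) -> str:
    candidates = retrieve(query, corpus)
    # The critical step: entitlement filtering before generation.
    allowed = [d for d in candidates if d.required_role in user_roles]
    if not allowed:
        return "No results you are authorized to view."
    return " ".join(d.text for d in allowed)  # stand-in for LLM generation

corpus = [
    Document("Carlos's salary is confidential.", required_role="hr"),
    Document("Q3 roadmap: ship the new dashboard.", required_role="all"),
]
print(answer("carlos salary", corpus, user_roles={"all"}))      # refused
print(answer("roadmap dashboard", corpus, user_roles={"all"}))  # allowed
```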

Is there an external AI problem?

Employees, still to this day, are taking sensitive internal information from their companies and pushing it out to ChatGPT, Gemini, and all sorts of other places. You wouldn’t believe how many times we get called by an organization that will say, “We believe this salesperson or this marketing person took all of their customer-account data and uploaded it to ChatGPT to give them a recommended account targeting plan or the new marketing strategy.”
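
One common countermeasure is DLP-style screening of outbound prompts before they leave the network. The sketch below flags a prompt that looks like a pasted customer list; the regex patterns and “bulk” threshold are illustrative assumptions, and real DLP tooling is considerably richer.

```python
# A minimal sketch of pre-submission screening for the exfiltration pattern
# described above: many identifiers in one prompt suggests a data dump.
# Patterns and threshold are illustrative only.

import re

PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "phone": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
}
BULK_THRESHOLD = 5  # assumption: more than five identifiers looks like a dump

def looks_like_bulk_upload(prompt: str) -> bool:
    hits = sum(len(p.findall(prompt)) for p in PATTERNS.values())
    return hits > BULK_THRESHOLD

prompt = "Plan outreach for: a@x.com, b@x.com, c@x.com, d@x.com, e@x.com, f@x.com"
print(looks_like_bulk_upload(prompt))  # True: six emails exceed the threshold
```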

What can an IT pro do to prevent these exposures?

You need to have an AI governance model for the organization…If you’re an IT pro, how do you put the right monitoring and governance around the devices? You say, “Look, maybe people are okay to access ChatGPT, but we’re monitoring what they submit.” It doesn’t need to be heavy-handed, but through a process like that, people will learn, and eventually the right behaviors will take shape. When you full-on block everything, we usually find people just go around it, and now you have shadow IT problems.
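
Translated into code, “monitor, don’t block” might look like the following sketch: requests to GenAI destinations are always allowed, but a prompt that appears to carry bulk data triggers a logged coaching event rather than a hard block. The detection pattern, threshold, and log sink are assumptions for illustration.

```python
# A minimal sketch of a monitor-don't-block egress policy. Hard blocking is
# deliberately absent: as noted above, blocks tend to breed shadow IT.

import logging
import re

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("genai-egress")

EMAIL = re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b")

def policy_decision(user: str, destination: str, prompt: str) -> str:
    """Always allow, but log a coaching event when the prompt carries
    what looks like bulk identifiers (illustrative threshold)."""
    if len(EMAIL.findall(prompt)) > 5:
        log.warning("coaching event: %s sent bulk-looking data to %s", user, destination)
        return "allow_with_coaching"
    log.info("ok: %s -> %s", user, destination)
    return "allow"

print(policy_decision("jdoe", "chat.openai.com", "Summarize this memo for me."))
```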

Can you define “AI governance model”?

There are a number of different types of AI-associated risks that we’ve identified as an industry. It could be hallucinations, it could be bias, it could be data privacy, data exposure…What we encourage organizations to do is have some kind of governance group that establishes and agrees on what that model is and which risks and concerns you want to account for, and then, when you’re looking to either utilize or create something that has an AI component, you check it against this reference model.
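
One lightweight way to encode such a reference model is as a checklist the governance group maintains: every proposed AI use case has to address each agreed-upon risk category before approval. The categories below mirror the ones Novak names; the review structure itself is a hypothetical illustration.

```python
# A minimal sketch of checking a proposed AI use case against an agreed
# reference model. The categories mirror the interview; the workflow is
# a hypothetical illustration.

RISK_CATEGORIES = ["hallucination", "bias", "data_privacy", "data_exposure"]

def review_use_case(name: str, assessments: dict[str, str]) -> dict:
    """assessments maps each risk category to 'mitigated', 'accepted',
    or 'unaddressed'. Missing or unaddressed categories block approval."""
    missing = [c for c in RISK_CATEGORIES if c not in assessments]
    unaddressed = [c for c, v in assessments.items() if v == "unaddressed"]
    return {"use_case": name,
            "approved": not missing and not unaddressed,
            "missing": missing,
            "unaddressed": unaddressed}

print(review_use_case("sales-email assistant", {
    "hallucination": "mitigated",
    "bias": "accepted",
    "data_privacy": "mitigated",
    "data_exposure": "unaddressed",  # sends the proposal back for rework
}))
```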
