FS-ISAC offers 8 data recommendations for banks deploying GenAI

Spoiler alert: There’s a lot of data governance.

Illustration of a bank vault surrounded by ones and zeroes.

Hannah Minn

GenAI is so money lately.

A March 2024 report from professional services firm KPMG, based on a survey of 200 US senior bank executives, found that 60% of financial services institutions have a GenAI-enabled cybersecurity solution in the pilot or production phase, and that 65% of execs agreed the technology is “an integral part of their institution’s long-term vision for driving innovation and ensuring the business remains relevant five years from now.”

Management consulting company PwC highlighted GenAI use cases for banks in its October 2024 study, including applications like streamlined loan processing, automated customer service, precise customer identification, and fraud detection.

As financial services orgs bring GenAI to the window, FS-ISAC outlined eight steps to help banks use the tech “effectively and cautiously” while data is selected, stored, and accessed.

“You have to be careful of what data it’s using and the output that it’s giving, especially if customers are inputting [personally identifiable information] or sensitive data into the model,” Michael Silverman, chief strategy and innovation officer at FS-ISAC, told IT Brew.

Steps provided by FS-ISAC, a member-driven non-profit supporting cybersecurity for the global financial sector, include taking a data-lineage inventory (Step #3); building effective test plans (Step #6); and staying up on model vulnerabilities (Step #7).

Silverman shared which of FS-ISAC’s recommendations might be easiest for financial services pros to forget.

Responses have been edited for length and clarity.

Out of the eight recommended steps, which might be the least thought about among your peers?

Data lineage, or data provenance, is really becoming more and more important as we get to the GenAI world. Is the data created by a machine? Is it stock-market, time-series data that we just know is what it is; there are very few errors in something like that [as] it’s machine-generated data? There could be human responses, like emails that people write. Even that can have some level of error, but it’s still important to know that that’s human-generated. And then there could be a whole collection of output from GenAI systems.
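One lightweight way to capture the distinction Silverman describes is to attach a provenance label to each record as it enters a training or retrieval pipeline. The sketch below is illustrative only; the category names, fields, and sources are assumptions, not part of FS-ISAC’s guidance.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from enum import Enum


class Provenance(Enum):
    """Broad lineage categories described in the interview."""
    MACHINE_GENERATED = "machine"   # e.g., stock-market time-series feeds
    HUMAN_GENERATED = "human"       # e.g., emails written by people
    MODEL_GENERATED = "genai"       # output produced by a GenAI system


@dataclass
class Record:
    """One data item tagged with where it came from."""
    content: str
    provenance: Provenance
    source: str                     # hypothetical name of the upstream system
    ingested_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))


# Tag items as they enter the pipeline.
records = [
    Record("AAPL,2024-03-01,172.50", Provenance.MACHINE_GENERATED, "market-feed"),
    Record("Hi team, the wire transfer cleared.", Provenance.HUMAN_GENERATED, "email-archive"),
    Record("Summary: the customer asked about...", Provenance.MODEL_GENERATED, "chat-assistant"),
]
```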

What happens once the data is classified?

Understanding that lineage tells you what you need to do with this data. Can I trust it? How do I trust it? What steps do I need to take in order to trust this data and use it effectively and properly?

Human data, I have to give some sort of scrutiny to it. GenAI data, I’m probably going to want to give even more scrutiny to it. Machine-level data, probably not as much. If I start training future models on the output of previous models without that verification, I could just be amplifying the same error over and over again.
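As a rough illustration of that tiered scrutiny, a pipeline might route each record to a review level based on its provenance tag and keep unverified model output out of future training sets. The tiers, labels, and the “verified” flag below are hypothetical, not drawn from FS-ISAC’s steps.

```python
# Map each provenance label to a review tier; exclude unverified GenAI output
# from training data so errors aren't amplified across model generations.
SCRUTINY = {
    "machine": "spot-check",   # machine-generated feeds: light validation
    "human": "review",         # human-written text: standard review
    "genai": "verify",         # model output: strongest verification
}


def training_eligible(record: dict) -> bool:
    """Admit model-generated records only once they've been explicitly verified."""
    if record["provenance"] == "genai":
        return record.get("verified", False)
    return True


records = [
    {"provenance": "machine", "content": "AAPL,2024-03-01,172.50"},
    {"provenance": "human", "content": "Hi team, the wire transfer cleared."},
    {"provenance": "genai", "content": "Summary: the customer asked...", "verified": False},
]

training_set = [r for r in records if training_eligible(r)]  # GenAI record is excluded
for r in records:
    print(f"{SCRUTINY[r['provenance']]:>10} -> {r['content'][:30]}")
```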

Why is data provenance so important for financial services institutions?

We could be writing emails, making decisions, doing analysis, or advising customers based on this output. We want to get it right. The better the inputs, the better the chances of better outputs.

Does the introduction of agents change the steps in any way?

The introduction of agents, which is still relatively new, I’ll say, only emphasizes more that we’ve got to get the data right. Because if an agent is taking 10 steps and basing where it goes later on the outcome of “step one,” there’s less opportunity for humans in the loop.
