
How to deploy gen AI as privacy laws evolve

Be transparent, be sensitive, be “really careful,” experts say.

There are so many AI frameworks and laws in the works—from NIST, from the White House, from the EU—one would almost need some kind of all-powerful chatbot to decipher them.

In December, during IT Brew’s live event, “A Delicate Balance: Tech Innovation and Privacy,” an attendee had the following question for guest Elise Houlik, chief privacy officer at the fintech platform Intuit:

“Since privacy and data protection laws are still evolving and maturing around generative AI, what are the best practices which you recommend to ensure your products are compliant and customers trust your products/platforms?”

We posed the question to Houlik and, separately, to other data privacy pros this month.

These responses have been edited for length and clarity.

Sameer Ansari, managing director, Protiviti: You have to be really careful to make sure you understand the data that's being absorbed, used, and trained on for that AI model, and then to identify everything that could be considered sensitive, including any personally identifiable information.

Richard Bownes, principal of data and AI, Kin + Carta: If you're going to use these in a client-facing, or even internal, way, one really good option would be to use a privately hosted model that you train on data you know: you know what the data is and where it comes from. That also gives you the control to not train on anything you wouldn't want the model to accidentally regurgitate…if you don't have PII data in your training set, you can't produce PII data randomly.
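Bownes's point, that PII kept out of the training set can never leak out of the model, implies a redaction pass before any record reaches the training corpus. A minimal sketch of that idea follows; the regex patterns (email, US-style SSN) and function names are illustrative assumptions, and a production pipeline would use a dedicated PII-detection tool rather than hand-rolled patterns.

```python
import re

# Hypothetical regex-based redaction pass over training records.
# These two patterns (email address, US-style SSN) are illustrative
# only; real PII detection covers far more categories.
PII_PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact(record: str) -> str:
    """Replace anything matching a PII pattern with a placeholder tag."""
    for label, pattern in PII_PATTERNS.items():
        record = pattern.sub(f"[{label}]", record)
    return record

def build_training_set(raw_records: list[str]) -> list[str]:
    """Redact every record before it can enter the training corpus."""
    return [redact(r) for r in raw_records]
```

Because redaction happens before training, the model never sees the raw values, which is exactly the guarantee Bownes describes.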

Houlik: You also have to appreciate sensitivities around new technology. Gen AI is super exciting and super new. If you're going to use it and deploy it, talk to people about that. Talk about, "This experience is leveraging, to some extent, generative AI; this is why and how it does so; this is how the information you provide is going to be consumed." Try to be as transparent as possible.


Bownes: You can follow best practices of software engineering, as well. You can have access to large language models (LLMs) being gated behind authentication, so that only known people can come in and therefore reduce the likelihood that a hacker gets access to LLMs…You can use encryption and tokenization on the outputs so that only users with a key can understand what’s coming out of the model. (That’s getting really fine-grained in terms of controls.)
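The two controls Bownes names, authentication in front of the model and tokenization of its outputs, can be sketched together. Everything below is a hypothetical illustration: the client IDs, keys, and the stand-in for the model call are invented for the example, not any real gateway's API.

```python
import hmac
import secrets

# Hypothetical gateway: the model is reachable only with a valid API
# key, and sensitive values in its output are swapped for opaque
# tokens that only key-holders can map back to the original.
API_KEYS = {"svc-reporting": "k3y-abc123"}  # illustrative credentials
_vault: dict[str, str] = {}                 # token -> original value

def authenticate(client_id: str, presented_key: str) -> bool:
    # compare_digest avoids timing side channels on the key check.
    expected = API_KEYS.get(client_id, "")
    return hmac.compare_digest(expected, presented_key)

def tokenize(value: str) -> str:
    """Replace a sensitive value with an opaque, reversible token."""
    token = f"tok_{secrets.token_hex(8)}"
    _vault[token] = value
    return token

def detokenize(token: str) -> str:
    """Map a token back to its original value (key-holders only)."""
    return _vault[token]

def query_model(client_id: str, key: str, prompt: str) -> str:
    if not authenticate(client_id, key):
        raise PermissionError("unknown client or bad key")
    # Stand-in for a real LLM call; the sensitive value in the answer
    # is tokenized before it leaves the gateway.
    return f"Account owner: {tokenize('Jane Doe')}"
```

An unauthenticated caller never reaches the model at all, and even an authorized caller only sees tokens unless they also hold detokenization rights, which is the fine-grained layering Bownes describes.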

Ansari: The challenge you have with gen AI, as it relates to privacy and data protection overall, is that gen AI has the ability to learn. And one of the big core tenets of privacy regulation is the right to be forgotten, or "delete my data"…Being able to ask a gen AI tool or [LLM] to "forget that personal data" is a near impossibility.

Houlik: It still, I think, comes back to rooting yourselves in what privacy laws, by definition, are intended to accomplish: giving individuals to whom the information belongs visibility into what's going on with their data, what's being collected, how it's being used, and who it's going to be shared with, in a very real way that is tech agnostic. If you can do that, you can survive the changing landscape, whether or not the law comes to save the day.
