In a true “ask the large language model to build the plane while flying it” kind of situation, today’s data privacy professionals have the tough task of embracing generative AI ideas while teaching employees how to run with them securely. Privacy officers like Intuit’s Elise Houlik guide their companies on how to use large language models (LLMs) while protecting sensitive data like source code and personally identifiable information.
After December’s live IT Brew event, “A Delicate Balance: Tech Innovation and Privacy,” an attendee had the following question for guest Elise Houlik, chief privacy officer at the fintech platform:
“How do you approach privacy training when technology emerges so quickly and potentially makes that training obsolete? Can training be quantified as a hit on company ROI?”
We posed the question to Houlik and separately, to other data-privacy pros this month.
These responses have been edited for length and clarity.
Houlik: If I’m building a new ecosystem, where the concepts of how to appropriately use and handle personal information are essential to navigating the web of new technology coming around, to me, training is essential: How do you get your ecosystem to know and understand the guardrails between the right and wrong ways to use data? There are 8 million ways you can go about it. You can have policies that are written and available. You can have guidelines, tools, and tip sheets.
Sameer Ansari, managing director, Protiviti: This is a challenge because I think the training agenda is always reactive to what’s happening, much like regulations are reactive to what’s happening. Technology will always lead, and then based on that, you have privacy practitioners and organizations thinking about, “How do I actually train people to help them understand some of the implications from a privacy perspective?”...But I think privacy practitioners need to stay really in lockstep with what’s happening from an emerging perspective and be able to respond quickly.
Richard Bownes, principal of data and AI, Kin + Carta: There are publishing companies that published textbooks for learning how to code or how to make machine learning models. And because of how fast everything’s moving, literally by the time the textbooks hit the press, because they have to contain such a wealth of information, they’re out of date…I could see it being the same thing with AI training: You’ve got your enterprise OpenAI license, and you’re using GPT-4, or the newest version, and there’s training in place that HR and legal have agreed on. And then a new version comes out, and there’s a massive change to some safeguards or regulations and the training data, and now they have to do it all over again. That wouldn’t be sustainable. I think the best thing you could do for a company would be to write down the general handbook, an onboarding document for the use of gen AI in Company X.
Ansari: [Training] is always going to be a cost center that you’re going to have to continuously invest in. It’s not necessarily going to improve your ROI from a company perspective, but be really focused as new technologies come out, and do more focused trainings with the people who are actually leveraging those technologies within your organization. And then you can go broader once you realize that the technology is starting to be adopted.
Houlik: [Reach] out at a mass scale to an organization and [have] a point in time where people focus on this, whatever “hit” that may have in terms of resourcing or time management…It’s not an eight-day training. It’s 20 to 30 minutes of time where I am speaking to an entire organization on, “This is the right way to do things. This is how we have aligned as a company upon how we want to construct our products and services.” I find that pays for itself down the road.