
Important questions to ask when you’re buying third-party AI

How to get some assurances from your vendor.

Emily Parsons


A company wanting to try out the latest AI-powered generator of transcripts, grammar checks, chatbot answers, and marketing materials had better also be generating some questions for its AI vendor, experts tell IT Brew.

Questions like: Where is that data being transferred and stored? And for how long?

“Is the company that you’re sending the data to training on your data in any way? If so: What data? How? Are they anonymizing that data before they train on it? Or do you have the option to tailor any of the sort of controls that you have in place for AI?” Whitney Merrill, head of global privacy and data protection at Asana, said during November’s IT Brew event, “Building an AI-ready data governance strategy.”

We asked other IT pros what questions to consider before buying a third-party AI tool.

Responses have been edited for length and clarity.

Richard Bownes, chief data scientist for Europe, Valtech: Someone might have created a new large language model architecture; they might have a bespoke data set. A follow-up question should be: Where does your data come from? Is it proprietary? Is it ethically sourced? Is it open source? Does someone else have access to it? If they have a unique technological offering and a unique data offering, then they have something which is “moated”: It has differentiated value from anybody else in the marketplace. That’s when it’s really worth considering buying, because then you couldn’t emulate that service.

Dennis Perpetua, global CTO digital workplace services and experience officer, vice president, and distinguished engineer, Kyndryl: If you’re making an investment in a vertical AI solution, will it feed experiences across your enterprise in other domains? If you’re bringing something in for an HR purpose, would that then integrate with other AI solutions that maybe an employee is working on?

David Brauchler, technical director and head of AI and ML for North America, NCC Group: Ultimately, we’re looking for low-assurance situations…a solution that is implemented in an environment where it is okay for it to be wrong. Fundamentally, these are all probabilistic algorithms, and so there’s always a chance that they will give you a wrong answer. If something like individual safety is at risk, or if this is your business’s mission, then you have to have other controls in place to protect it. You need to be asking questions like: What are those security controls that we can implement?


Bownes: What assurances do they have? Are they ISO certified?

Brauchler: This isn’t a question just for the C-levels to address. They need to be talking with their security engineers.

Jennifer Yokoyama, SVP and deputy general counsel, IP and Technology, Cisco: What are they saying they can do, in terms of technological protections, around how they’re handling your data through their AI tools, and what are they willing to put in a contract around commitments? Sometimes there is a gap or a disconnect between those two things, and often, I think, it helps to understand how confident they are in their governance structures internally.

Bownes: How can you ensure these models are doing what they say they’re doing, and in a fair way…If it’s deciding whether someone qualifies for a loan, or if it’s in computer vision and it’s looking at X-rays or MRIs, how do you know that the data it’s trained on is representative of the people using it? How could you audit a decision that it’s making? How do you know that it’s made on fair data underneath?

Perpetua: I’ve heard really horrendous stories that have disenfranchised various groups of humans because machine learning took the popular ideas and retrained itself only on certain biases based on the majority of the users…If they’ve done the due diligence around managing bias in machine learning, that’s a good kind of litmus test for them having other good, judicious processes in place.
