OpenAI is starting to check IDs at the door for developers who access the company’s AI models through its application programming interfaces (APIs).
An OpenAI support page, updated this month, revealed an API-verification option for organizations, “to mitigate unsafe use of AI.” The feature, according to two analysts who spoke with IT Brew, represents a “first step” in protecting the model.
“AI is so difficult to predict, to monitor, to make reliable, to keep secure, to protect, that if you think putting a front-door lock on your API is enough, then you’re getting a false sense of security,” Avivah Litan, VP and distinguished analyst at Gartner, said, characterizing the verification also as a “baby step.”
According to OpenAI’s page, the verification requires a government-issued ID from a supported country, and each ID can verify only one organization every 90 days.
“Unfortunately, a small minority of developers intentionally use the OpenAI APIs in violation of our usage policies,” the page reads. “We’re adding the verification process to mitigate unsafe use of AI while continuing to make more advanced AI capabilities available to the broader developer community.”
While OpenAI did not respond to IT Brew about the specific threats driving an API verification option, the company, which just released its newest reasoning models, shared research this year on malicious uses of its models, including for surveillance support and romance-baiting scams.
Bloomberg reported in January that Microsoft security researchers observed individuals believed to be associated with AI company DeepSeek exfiltrating a large amount of data using the OpenAI API.
Sean McHale, partner at tech consultancy West Monroe, sees the verification feature as a way to track threat actors potentially using large language models for nefarious purposes like spreading disinformation through model-made deepfakes.
“I think that this is just a simple, straightforward way of providing some form of regulation without creating a completely new system of checks and records,” McHale said, comparing the API verification to cryptocurrency’s “know your customer” requirements for preventing money laundering.
OpenAI shares production and security best practices that emphasize tactics like “knowing your customer” and constraining user input, and permissions can be set on its API, according to a support page.
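As a rough illustration (not OpenAI’s own guidance), the sketch below shows what “constraining user input” can look like with the official openai Python client; the model name, length cap, and blocklist are placeholder assumptions.

```python
# Minimal sketch of constraining user input before it reaches the API.
# Assumes the official `openai` Python client; the model name, length cap,
# and blocklist below are illustrative placeholders, not OpenAI guidance.
import re
from openai import OpenAI

MAX_INPUT_CHARS = 2_000                      # cap prompt size per request
BLOCKED_PATTERNS = [r"(?i)ignore previous"]  # naive prompt-injection screen

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def constrained_completion(user_text: str) -> str:
    if len(user_text) > MAX_INPUT_CHARS:
        raise ValueError("Input exceeds allowed length")
    if any(re.search(p, user_text) for p in BLOCKED_PATTERNS):
        raise ValueError("Input rejected by content screen")
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system", "content": "Answer only questions about billing."},
            {"role": "user", "content": user_text},
        ],
    )
    return response.choices[0].message.content
```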
Microsoft, which offers its own Azure OpenAI services, allows customers to set access permissions. (Google, too, offers admin controls for its Gemini app.) “OpenAI’s ID-based verification is simpler but less enterprise-centric, and focuses on individual organization verification—not individual user verification,” Litan wrote in a follow-up email to IT Brew. Microsoft’s Entra ID-based authentication integrates with enterprise identity systems, she noted.
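For comparison, here is a minimal sketch of the Entra ID-based authentication pattern Litan refers to, assuming the azure-identity and openai packages; the endpoint and deployment names are placeholders.

```python
# Sketch of Entra ID-based authentication against Azure OpenAI, assuming the
# `azure-identity` and `openai` packages. Endpoint and deployment names are
# placeholders; the token scope is the standard Cognitive Services scope.
from azure.identity import DefaultAzureCredential, get_bearer_token_provider
from openai import AzureOpenAI

token_provider = get_bearer_token_provider(
    DefaultAzureCredential(),  # picks up the caller's Entra identity
    "https://cognitiveservices.azure.com/.default",
)

client = AzureOpenAI(
    azure_endpoint="https://example-resource.openai.azure.com",  # placeholder
    azure_ad_token_provider=token_provider,
    api_version="2024-06-01",
)

response = client.chat.completions.create(
    model="example-deployment",  # placeholder deployment name
    messages=[{"role": "user", "content": "Hello"}],
)
print(response.choices[0].message.content)
```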
For Litan, countering today’s cyber threats related to AI access and misuse requires user profiling, behavioral analytics, and detection of abnormal access from a given account.
The FBI warned the public in December of an increase in fraudsters using AI-generated text, images, video, and audio.
“If you think about all the breaches that happen today in the real world, you know user-account verification is just not enough. The criminals will get into a machine that’s logging in and just hijack the credential,” Litan said, adding that enterprises must buy technology that looks for anomalies, possibly through emerging tools in the category of AI trust, risk, and security management (AI TRiSM).
“They need to look at every interaction into the OpenAI model and everything coming back, and monitor it for acceptable use, for criminal behavior, for IP violations, for other compliance needs. So, there’s a lot of work that they need to do. They can’t rely on OpenAI to do that.”
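To make that concrete, the sketch below illustrates the kind of per-account monitoring Litan describes: logging every prompt/response pair and flagging abnormal request volume. The wrapper, threshold, and flagging logic are hypothetical and do not represent any vendor’s AI TRiSM product.

```python
# Illustrative sketch: wrap each model call with audit logging and a simple
# per-account rate anomaly check. Threshold and logic are hypothetical.
import logging
import time
from collections import defaultdict, deque

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("llm-audit")

REQUESTS_PER_MINUTE_LIMIT = 60  # illustrative threshold
_recent = defaultdict(deque)    # account id -> timestamps of recent calls

def audited_call(account_id: str, prompt: str, call_model) -> str:
    """Log the interaction and flag abnormal request volume for the account."""
    now = time.time()
    window = _recent[account_id]
    window.append(now)
    while window and now - window[0] > 60:
        window.popleft()  # keep only the last 60 seconds of activity
    if len(window) > REQUESTS_PER_MINUTE_LIMIT:
        log.warning("Abnormal request volume for account %s", account_id)

    response_text = call_model(prompt)  # caller supplies the actual API call
    log.info("account=%s prompt=%r response=%r", account_id, prompt, response_text)
    return response_text
```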