Clarity, an Israeli AI cybersecurity startup, announced last week it had raised $16 million in seed funding led by Walden Catalyst and Bessemer Venture Partners. Founded in 2022 by Michael Matias, Natalie Fridman, and Gil Avriel, Clarity helps news outlets, government entities, intelligence agencies, independent journalists, and companies detect deepfakes and AI-generated synthetic media.
IT Brew caught up with Matias, who serves as the company’s CEO, to chat funding, deepfakes, and disinformation during the Israel–Hamas war.
On deepfakes and cybersecurity
“I think we’re really thinking of deepfakes as a form of cybersecurity virus,” he told IT Brew. “Besides that, there are very prominent strains of the virus, and then they fork and replicate pretty quickly.”
With the funding, Matias said the team plans on ramping up research efforts and digging deeper into analysis.
“Ultimately, deepfakes are getting better. We’re seeing that with OpenAI’s work. We’re seeing that with Google’s work, the open-source community,” he said. “So, we’re doubling down on our research. We’ve shown and we’ve proven that our techniques and the way we’re building our models is effective, but the stakes are accelerating faster than ever.”
On how IT leads are using the platform
CISOs, IT managers, and others in the security field can use Clarity within their videoconferencing platforms and communication channels to prevent impersonation.
“We have a solution where we integrate into Zoom, [Microsoft] Teams, and [Google] Meet, and then we give real-time alerts to participants in the video call where there’s a suspicion,” said Matias. “This is now rolling out in a few different organizations.”
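For illustration only, here is a minimal sketch of the kind of flow Matias describes: media chunks from a call are scored by a detector, and participants are warned when suspicion crosses a threshold. Every name, value, and function here is hypothetical; this is not Clarity's integration or any vendor's actual API.

```python
"""Minimal sketch (not Clarity's API) of a real-time call-monitoring alert flow:
media chunks from a video call are scored and participants are warned when a
suspicion score crosses a threshold. The detector is a stand-in stub."""

from dataclasses import dataclass
from typing import Iterable

SUSPICION_THRESHOLD = 0.8  # assumed cutoff; a real deployment would tune this


@dataclass
class MediaChunk:
    speaker_id: str
    frame: bytes        # a video frame or short audio segment from the call
    score: float = 0.0  # precomputed score used by the stub detector below


def score_deepfake(chunk: MediaChunk) -> float:
    """Stand-in for a real detector; returns 0.0 (likely real) to 1.0 (likely fake)."""
    return chunk.score  # placeholder so the sketch runs end to end


def monitor_call(chunks: Iterable[MediaChunk]) -> None:
    """Alert participants whenever a chunk looks synthetic."""
    for chunk in chunks:
        score = score_deepfake(chunk)
        if score >= SUSPICION_THRESHOLD:
            print(f"ALERT: possible synthetic media from {chunk.speaker_id} "
                  f"(score={score:.2f})")


# Example: two chunks, one of which the hypothetical detector flags.
monitor_call([MediaChunk("host", b"...", 0.12), MediaChunk("guest", b"...", 0.91)])
```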
On the future of this industry
“I think that a year from now, practically every IT team will have a solution like the one that Clarity provides as part of their systems. There’s no way around it. Deepfakes are advanced phishing threats. They’ve already attacked multiple companies. We know of many other instances in which they were used, but it never made it to the public side. So, I think that every IT manager and CISO, by the end of this year, will be adopting the solution.”
On the tech side of Clarity’s models
Clarity focuses on three modalities: video, audio, and images, according to Matias. “We have dozens of different, very small, fine-tuned neural networks that are looking for different pieces,” he said. “Some of them are looking at facial features, like the lips, the nose, the ears, and the eyes. Some of them are looking at the voice track—identifying whether there are anomalies in the speaker, and some of them are reverse engineering deepfake creation.”
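As a rough illustration of the ensemble approach Matias describes, the sketch below combines several small, specialized detectors into a single fake-likelihood score. The detectors, feature names, and the simple averaging are all assumptions for demonstration, not Clarity's actual models or weighting.

```python
"""Illustrative sketch, not Clarity's implementation: an ensemble of small
specialized detectors, each scoring one signal (lip sync, blinking, voice
anomalies), whose outputs are combined into one fake-likelihood per media item."""

from typing import Callable, Dict

# Each detector maps a media item to a score in [0, 1]; these stubs stand in for
# the "very small, fine-tuned neural networks" Matias mentions.
Detector = Callable[[dict], float]

def lip_sync_detector(media: dict) -> float:
    return media.get("lip_sync_mismatch", 0.0)

def eye_blink_detector(media: dict) -> float:
    return media.get("blink_irregularity", 0.0)

def voice_anomaly_detector(media: dict) -> float:
    return media.get("voice_artifacts", 0.0)

DETECTORS: Dict[str, Detector] = {
    "lips": lip_sync_detector,
    "eyes": eye_blink_detector,
    "voice": voice_anomaly_detector,
}

def ensemble_score(media: dict) -> float:
    """Average the per-detector scores; a real system might weight or learn this."""
    scores = [detector(media) for detector in DETECTORS.values()]
    return sum(scores) / len(scores)

# Example: hypothetical precomputed signals for one suspect clip.
clip = {"lip_sync_mismatch": 0.9, "blink_irregularity": 0.7, "voice_artifacts": 0.8}
print(f"fake likelihood: {ensemble_score(clip):.2f}")  # fake likelihood: 0.80
```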
On the impact of the Israel–Hamas war
“When we started Clarity, we anticipated there to be an inflection point for the mass distribution of deepfakes around the US elections,” Matias said. “I think the war in Gaza expedited a lot of that, particularly in images…On Oct. 7, that all went into production very quickly.”
“We work very actively with intelligence agencies and government organizations to verify the media in the context of the war,” he added. “This is media, including hostage videos, including media from the field, and the same is true for our work with large publishers and different news outlets. A lot of the media that we’re dealing with is definitely in the context of the war.”