A Consumer Reports investigation found that the safeguards AI voice-cloning software uses to ward off malicious actors are about as sturdy as a house of cards.
During its study, the independent nonprofit member organization found that four of the six tools it used to clone existing audio of a Consumer Reports employee (ElevenLabs, Speechify, PlayHT, and Lovo) required only that it check a box confirming it had the legal rights to clone the voice, “or make a similar self-attestation,” to proceed.
The remaining two tools, Descript and Resemble AI, had slightly better guardrails, but they weren’t foolproof either. Descript’s voice-cloning tool required the investigators to read and record a consent statement before it would create cloned audio; Consumer Reports noted that this step could easily be bypassed by using a different tool to generate a cloned version of the statement.
Resemble AI, on the other hand, required investigators to use audio recorded in real time to produce their first voice clone, but they were able to bypass this by playing back a recording of the employee’s voice instead. The investigators noted, however, that this method did not produce a “compelling” impersonation.
Voice actors. Malicious actors have wasted no time in leveraging AI voice cloning as a new way to target victims. A 2023 McAfee global study that surveyed 7,000 people found that 25% had either experienced an AI voice-cloning scam or knew someone who had been a victim of one.
Burden of responsibility. The findings raise the question of who should bear the onus of protecting against malicious use of AI voice-cloning software. Visar Berisha, an associate dean of research and commercialization at Arizona State University, told us that the answer is complicated.
“There’s two different questions: Who should it fall on? And then, who will it fall on?” Berisha said. “I think who it will fall on is the individual consumers eventually, because I don’t know that there’s appetite to change policy, for example, at the government level.”
Hany Farid, a professor at the University of California, Berkeley, told us that in addition to consumers and the AI voice-cloning platforms themselves, videoconferencing software companies should bear some level of accountability for keeping people safe.
“Do I think they have an undue burden?” Farid said. “No, but these are pretty big companies, and they’ve got some pretty deep profit pockets, so maybe they should figure out how to protect us when we’re on calls like this.”
He added that the government should also take better precautions to protect against the growing threat.
“Our regulators have got to get their act together,” Farid said. “They’ve got to start taking this seriously.”
Future perfect. Berisha and Farid both told IT Brew that, realistically, there is very little individuals can do at the moment to protect themselves against malicious cloning. However, Berisha predicts that detecting cloned audio will become easier in the future.
“I think in the future there’ll be solutions…where if I put out a piece of media, my voice along with my video, along with it comes this authentication that it came from me directly,” Berisha said. “And as a society, I think we will evolve to only trust media that is stamped in this way.”
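The kind of “stamped” media Berisha describes maps onto content-provenance schemes built on digital signatures: a publisher signs the bytes of an audio file with a private key, and anyone holding the matching public key can confirm the file came from that publisher and wasn’t swapped for a clone. The sketch below is purely illustrative and is not drawn from the Consumer Reports study or any specific vendor’s system; it assumes the third-party Python `cryptography` package, and the file name is hypothetical.

```python
# Illustrative sketch of media provenance via digital signatures.
# Assumes the third-party "cryptography" package: pip install cryptography
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# Publisher side: generate a keypair and sign the raw bytes of an audio file.
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

audio_bytes = open("statement.wav", "rb").read()  # hypothetical file
signature = private_key.sign(audio_bytes)

# Recipient side: verify the signature with the publisher's public key.
# Any tampering, or a cloned substitute, makes verification fail.
try:
    public_key.verify(signature, audio_bytes)
    print("Signature valid: audio matches what the publisher signed.")
except InvalidSignature:
    print("Signature invalid: audio was altered or did not come from this key.")
```

In practice, schemes along these lines also have to solve key distribution and embed the signature in the media’s metadata, which is where the “as a society, we will evolve to only trust media that is stamped this way” part comes in.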