Cybersecurity

Researchers discover that audio deepfake detectors are vulnerable to adversarial attacks

Researchers from Hamad Bin Khalifa University were able to reduce the accuracy rate of the audio-deepfake classifier Deep4SNet from 98.5% to 0.08% in one of the attack scenarios they performed.

Anna Kim

4 min read

State-of-the-art audio deepfake detectors are no match for well-equipped bad actors, according to a recently published research study.

A trio of researchers from Hamad Bin Khalifa University has discovered that some AI-based audio authentication systems are vulnerable to adversarial attacks, a type of attack that deceives a machine learning model by manipulating its input data.

The trio, comprising Roberto Di Pietro, Spiridon Bakiras, and Mouna Rabhi, reached the finding by engineering a set of attacks on Deep4SNet, an audio deepfake detection model with a 98.5% accuracy rate in classifying fake speech. Deep4SNet detects fake audio by converting input audio into histogram images. The model then uses a convolutional neural network, trained on more than 2,000 original and fake voice recordings and described as “highly accurate” in image classification tasks, to classify whether an audio sample is real or fake.
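
The article doesn’t reproduce Deep4SNet’s actual code, but the pipeline it describes can be sketched roughly as follows: render a waveform’s amplitude histogram as a small grayscale image, then classify that image with a binary CNN. Everything here (bin counts, layer sizes, function names) is an illustrative assumption, not the model’s real architecture.

```python
# Minimal, hypothetical sketch of a Deep4SNet-style pipeline:
# waveform -> amplitude-histogram image -> small binary CNN.
import numpy as np
import tensorflow as tf

def waveform_to_histogram_image(waveform: np.ndarray, size: int = 64) -> np.ndarray:
    """Bin amplitudes into `size` buckets and tile the normalized counts
    into a size x size grayscale "histogram image" for the CNN."""
    counts, _ = np.histogram(waveform, bins=size, range=(-1.0, 1.0))
    norm = counts / max(counts.max(), 1)   # scale counts to [0, 1]
    image = np.tile(norm, (size, 1))       # each row repeats the histogram
    return image[..., np.newaxis]          # add a channel axis

# A small binary image classifier in the spirit of the paper's CNN.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(64, 64, 1)),
    tf.keras.layers.Conv2D(16, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # output: 1 = real, 0 = fake
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
```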

Fool me twice. During the study, the research team focused on mounting a graybox attack against Deep4SNet, in which the attacker has only limited knowledge of the victim’s model. In this case, the attacker would have access to the data used to train Deep4SNet.

From there, the researchers leveraged generative adversarial networks (GANs) to create histogram images that would trick the audio deepfake detector in two proposed attack scenarios. In a GAN, one neural network learns to create new data by competing against a classifier network that judges whether the data is real or fake.
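
In rough code, the attack idea looks like this (a hedged sketch, not the authors’ implementation): freeze the detector and train a generator until its noise-seeded histogram images score as “real.” The generator architecture and hyperparameters are assumptions; `detector` stands in for a Deep4SNet-style classifier such as the CNN sketched above.

```python
# Hedged sketch of the GAN-based attack: the detector stays frozen while
# the generator learns to produce histogram images it scores as "real".
import tensorflow as tf

generator = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(100,)),                 # random noise seed
    tf.keras.layers.Dense(64 * 64, activation="sigmoid"),
    tf.keras.layers.Reshape((64, 64, 1)),                # fake histogram image
])
optimizer = tf.keras.optimizers.Adam(1e-4)

def attack_step(detector: tf.keras.Model) -> tf.Tensor:
    noise = tf.random.normal((32, 100))
    with tf.GradientTape() as tape:
        fakes = generator(noise, training=True)
        scores = detector(fakes, training=False)         # detector is not updated
        # Push the detector's "real" score toward 1 on generated images.
        loss = tf.reduce_mean(
            tf.keras.losses.binary_crossentropy(tf.ones_like(scores), scores))
    grads = tape.gradient(loss, generator.trainable_variables)
    optimizer.apply_gradients(zip(grads, generator.trainable_variables))
    return loss
```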

The first attack scenario showed how random noise could be used to deceive Deep4SNet.

“Using again the neural networks, we were able to generate a histogram, an image essentially, that is able to match very well the image stored for that user,” Bakiras told IT Brew.

Under this scenario, the researchers discovered that the accuracy rate of Deep4SNet dropped to 0.08%.

The second attack scenario demonstrated how an attacker could splice together pieces of existing audio samples to form a phrase that could deceive the detector.
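
As a toy illustration of that mechanic (an assumption about how such splicing works, not the paper’s code), the attack amounts to concatenating word-level clips taken from genuine recordings:

```python
# Toy splice attack: stitch word-level clips from genuine recordings
# into a new target phrase. `clips` is a hypothetical word -> waveform
# mapping; a real attack would also smooth the joins between clips.
import numpy as np

def splice_phrase(clips: dict[str, np.ndarray], phrase: str) -> np.ndarray:
    return np.concatenate([clips[word] for word in phrase.lower().split()])
```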

The wild, wild AI west. The findings come at a time when it takes a bad actor only five minutes to generate cloned audio capable of bypassing a bank’s voice authentication system. Di Pietro told IT Brew that audio deepfakes will continue to pose a major problem for the security industry as bad actors leverage them for fraud.

“Nowadays, it’s possible to generate fake audios of everyone,” Di Pietro said. “The problem is very general and is touching big CEOs, but also normal people.”

Di Pietro added that the current state of security against audio deepfakes is “lamentable” and that the attacks identified by his research team would be “very easy” to perform by a malicious actor.

One small step. Fortunately, the research team has proposed a simple defense mechanism to combat adversarial attacks against audio deepfake detectors like Deep4SNet. Rabhi told IT Brew that users can pair a speech-to-text application programming interface (API) with an audio deepfake detector to further verify the authenticity of an audio sample.

“By doing so, we were able to reduce the attack success rate and achieve better detection accuracy,” Rabhi said.
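
A minimal sketch of such a layered check might look like the following, where `detector_score` and `transcribe` are hypothetical stand-ins for Deep4SNet and any speech-to-text API:

```python
# Hedged sketch of the proposed defense: accept a sample only when the
# deepfake detector AND a speech-to-text transcript both check out.
def verify_audio(waveform, expected_phrase: str,
                 detector_score, transcribe,
                 threshold: float = 0.5) -> bool:
    if detector_score(waveform) < threshold:   # detector flags it as fake
        return False
    transcript = transcribe(waveform).strip().lower()
    # Noise-seeded or spliced samples rarely transcribe cleanly to the
    # challenge phrase, so the text check catches what the image-based
    # detector misses.
    return transcript == expected_phrase.strip().lower()
```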

While the proposed defense mechanism is simple and cost-effective, the trio highlighted the need for more research in the AI-security space to further mitigate the impact of future threats.

“We saw how easy it is to bypass authentication that is based on voice, but this is not the only security concern. The capabilities of AI are tremendous,” Bakiras said. “Our work is just the first step.”
