HIPAA Times news

FBI warns of deepfake voices impersonating US officials

Written by Farah Amod | May 31, 2025 6:00:33 PM

The FBI is warning of a growing AI-powered scam where deepfake voices impersonate U.S. officials to trick victims into revealing sensitive government and financial credentials.

 

What happened

The FBI has issued a warning about a sophisticated fraud campaign using deepfaked voices of senior U.S. government officials. Active since April, the scam primarily targets current and former officials in an attempt to steal login credentials and access government and financial systems. Criminals are using AI-generated voice calls (vishing) and text messages (smishing) to gain victims’ trust before moving conversations to unspecified messaging platforms.

 

Going deeper

According to the FBI, the attackers create convincing audio messages by cloning voices from publicly available samples. These AI-generated messages impersonate high-ranking officials and attempt to build rapport with targets. While the agency hasn’t named the officials being imitated, it warns that the goal is to compromise official accounts and harvest sensitive data, including financial details.

Recipients are urged not to trust incoming messages at face value, even when they appear to come from recognizable names. Instead, the FBI advises verifying the contact by calling back through official departmental numbers. Indicators of deepfake audio include odd phrases or speech patterns that don’t match the person’s usual tone or vocabulary.

Deepfakes have been used in cybercrime for several years, but the falling cost and rising quality of AI tools have made them increasingly accessible. In this campaign, attackers are relying on pre-recorded or AI-generated audio rather than real-time interaction, which remains more costly and technically complex.

 

What was said

“AI-generated content has advanced to the point that it is often difficult to identify,” the FBI said in its bulletin. “When in doubt about the authenticity of someone wishing to communicate with you, contact your relevant security officials or the FBI for help.”

Security experts echoed the concern. Chester Wisniewski, global field CISO at Sophos, said the cost of creating convincing real-time deepfakes is still high, possibly in the tens of millions of dollars. While video deepfakes with live interactivity are not yet common, audio-based scams have already reshaped fraud.

 

The big picture

As generative AI becomes more accessible, deepfake-enabled fraud is evolving from a niche concern into a national security threat. The ability to impersonate trusted voices opens new doors for espionage, data breaches, and financial theft, especially when trust is manipulated through familiar identities. The FBI’s warning is a reminder that as AI capabilities advance, so must our vigilance in verifying who, or what, is really on the other end of the line.

 

FAQs

How can I tell if a voice message is a deepfake?

Listen for unnatural pauses, inconsistent tone, robotic intonation, or phrases that seem out of character. Deepfakes may also lack background noise or emotional nuance.

 

What should I do if I receive a suspicious message claiming to be from a government official?

Do not respond or click on any links. Instead, independently verify the source by contacting the agency directly through official channels.

 

Are private citizens at risk or just government officials?

While the current campaign targets officials, the same techniques can easily be adapted to target businesses and the general public, especially those in high-trust roles.

 

What technologies are used to create deepfake voices?

Attackers use machine learning models trained on public audio samples such as speeches, interviews, or podcasts to synthesize highly realistic speech.

 

Can mobile carriers or apps block deepfake calls and messages?

Some advanced spam filters can detect spoofed numbers or known scam patterns, but current technology struggles to detect deepfakes in real time without user intervention.