A surge in deepfake-driven scams pushed Google to remove billions of ads and suspend millions of accounts in its toughest enforcement year.
Google removed 5.1 billion ads and suspended more than 39.2 million advertiser accounts in 2024, according to its newly released 2024 Ads Safety Report. The crackdown reflects a rise in AI-powered scams, especially deepfake impersonations of public figures used in fraudulent ad campaigns.
The company attributes the spike in enforcement to threats driven by generative AI, which scammers are using to create convincing and dangerous ad content, particularly for investment and cryptocurrency scams.
One of the most serious trends noted in the report is the use of AI-generated deepfake videos that impersonate celebrities and influencers to lend credibility to scams. To counter this, Google formed a task force of more than 100 experts to identify patterns and adjust its ad policies in real time.
One of the biggest policy updates was a revision to Google’s Misrepresentation policy, specifically targeting AI-driven impersonation schemes. As a result, over 700,000 advertiser accounts were permanently suspended for such violations, leading to a 90% drop in related scam reports throughout the year.
Google noted that AI strengthened its defenses as well: the company added more than 50 enhancements to its machine-learning models to detect malicious ad behavior faster and more accurately, often before users reported it.
The company also took additional enforcement actions across various ad categories. Publisher enforcement ramped up as well, with violations issued against 1.3 billion web pages and 220,000 publisher sites.
“To fight back, we quickly assembled a dedicated team of over 100 experts to analyze these scams and develop effective countermeasures,” Google stated in the report. “While we are encouraged by this progress, we continue to work to prevent these scams.”
Scammers aren’t just getting smarter; they’re getting faster, and Google’s scramble to shut down billions of ads shows how quickly things can spiral. Deepfakes and AI scams aren’t a future problem; they’re already flooding ad platforms. Google’s massive takedown in 2024 isn’t just a cleanup; it’s a sign the rules are being rewritten as the threats come in.
How are scammers using AI in ad fraud?
Scammers are increasingly using AI to create deepfake videos and fake endorsements for investment schemes, crypto fraud, and fake giveaways, often impersonating public figures.

How can users report a suspicious ad?
Users can click the three-dot menu next to an ad and select “Report ad,” or use Google’s Ad Feedback Form to flag misleading or harmful content.

How can legitimate advertisers contest an enforcement action?
Verified advertisers can appeal impersonation or misrepresentation claims through Google’s advertiser identity verification process and dedicated support channels.

How does Google verify advertiser identities?
Google requires identity verification for certain ad categories and uses document checks, business registration data, and, in some cases, video calls to confirm authenticity.

Are other platforms facing the same problem?
Yes, platforms like Facebook, TikTok, and YouTube have also reported sharp increases in AI-generated scams, prompting broader industry efforts to revise ad policies and detection tools.