The $25 Million Deepfake Meeting (And Why You Can't Tell What's Real Anymore)
The Resistance Sabotage Manual: Day 3 of 12
August 16, 2025
The Zoom meeting looked perfect. Too perfect.
The CFO was there. The entire leadership team. All discussing an urgent acquisition that needed immediate wire transfers. The finance worker in Hong Kong did what any employee would do — followed his CFO's direct orders.
$25 million gone. February 2024.
One problem: Everyone in that meeting except him was a deepfake.
By the time investigators confirmed what happened, the money had vanished through cryptocurrency exchanges. But here's the part that should terrify you: When the real CFO appeared on TV to warn others, comments flooded in asking "How do we know THIS isn't the deepfake?"
Nobody could answer with certainty.
Russia pioneered what analysts call the "firehose of falsehood": flooding the zone with so much contradictory information that truth becomes impossible to determine. China refined "reverse propaganda," creating obviously fake opposition content to discredit real dissent. Now America has surpassed them both. We've democratized the confusion.
This is how authoritarianism wins in 2025: Not by hiding the truth, but by making truth impossible to recognize.
The Day Reality Broke
Remember when you could trust your eyes? That ended definitively in 2024.
January: Thousands of New Hampshire voters received robocalls from "President Biden" telling Democrats not to vote in the primary. It was so convincing that even campaign staffers initially thought it was real. The creator was fined $6 million. Estimates suggest 5,000 to 25,000 voters received the calls.
April: A deepfake video of Hillary Clinton endorsing Ron DeSantis went viral across Instagram, Twitter, YouTube, and TikTok. Millions of views accumulated before fact-checkers could respond. When they debunked it, believers claimed the fact-checkers were the real conspiracy.
August: Steve Beauchamp, an 82-year-old retiree, lost $690,000 to deepfake videos of Elon Musk promoting an investment scheme. "I mean, the picture of him — it was him," Beauchamp told The New York Times. "Now, whether it was A.I. making him say the things that he was saying, I really don't know."
The New York Times dubbed deepfake "Musk" the "Internet's biggest scammer." But Musk deepfakes were just one of thousands of AI-generated scams flooding social media.
A Maryland high school principal's reputation was destroyed when a deepfake of him making racist remarks went viral, accumulating massive views within hours. Death threats poured in. He was placed on leave. Even after it was proven fake, parents still wondered: "But what if...?"
We're doing it to ourselves.
Your Brain on Information Chaos
Studies show that people struggle badly with deepfake detection. An iProov study found only 0.1% of participants could accurately distinguish all deepfakes from real content. When warned that videos might contain deepfakes, only 21.6% correctly identified them, according to Royal Society research. Most people mistakenly flag real videos as fake while missing actual deepfakes.
Carnegie Mellon's Thomas Scanlon warns: "The concern with deepfakes is how believable they can be, and how problematic it is to discern them from authentic footage."
This is "epistemic collapse" — when a population loses the ability to determine truth. And it's exactly what authoritarians need.
Why fight protesters when you can make people doubt protests are real? Why censor journalists when you can make people doubt journalism exists? Why hide atrocities when you can make people doubt their own eyes?
The genius is its efficiency. The KGB needed thousands of agents to spread dezinformatsiya. The Stasi required half of East Germany to spy on the other half. Modern authoritarians just need you to have a phone and social media accounts.
You'll do the rest yourself.
The Three Stages of Reality Collapse
Stage 1: Doubt Everything (We are here)
You see a video of ICE raids, police violence, or protest crowds. Your first thought isn't "this is terrible" — it's "is this real?" You check comments for debunking. You reverse image search. You look for tells of AI generation. By the time you maybe-possibly verify it, the moment for action has passed. The outrage has cooled. The opportunity is gone.
Stage 2: Believe Nothing (Coming soon)
Eventually, verification exhaustion sets in. Everything might be fake, so nothing matters. Real atrocities get the same shrug as obvious fakes. "Who knows what's true anymore?" becomes the universal response. Putin's Russia has been here since 2014.
Stage 3: Believe Only Power (The goal)
When shared reality completely collapses, people default to tribal truth. Whatever your side says is real; whatever the other side says is fake. Facts become team jerseys. China achieved this with COVID origins — no Chinese citizen can publicly question the official narrative, regardless of evidence.
We're accelerating through Stage 1. Some cities are already entering Stage 2.
This Month's Manufactured Confusion
The 2024 election cycle proved how effective confusion campaigns can be:
The Robocall: When "Biden" told New Hampshire Democrats not to vote, initial reactions split three ways: those who believed it, those who suspected it was fake, and those who didn't know what to believe. The confusion was the point. Lingo Telecom paid a $1 million fine for distributing it, but the template was established.
The Financial Scams: The Hong Kong incident wasn't isolated. Deloitte found that 25.9% of executives reported their organizations experienced deepfake incidents targeting financial data in 2024. The Financial Crimes Enforcement Network issued an emergency alert about deepfake fraud schemes.
The Cheapfakes: NPR noted that while full deepfakes got attention, simpler "cheapfakes" — edited videos, out-of-context clips, manipulated audio — did most of the damage. They're easier to make, harder to definitively disprove, and spread just as fast.
Meta's own report admitted less than 1% of fact-checked misinformation during 2024 elections was AI content. But that small percentage consumed disproportionate attention and resources. The flood of potential fake content made real documentation suspect by association.
How Countries Die in the Age of Deepfakes
Slovakia, 2023: Two days before the election, fake audio of a candidate discussing vote manipulation went viral. By the time it was debunked, the election was over. Šimečka's Progressive Slovakia party lost to the pro-Russian opposition in what observers called a close race where the deepfake may have been decisive.
India, 2024: The world's largest election saw AI-generated content from all parties. Politicians openly shared AI memes of opponents. The line between satire, propaganda, and reality dissolved completely.
United States, 2024-2025: The FEC announced they would not regulate AI in political campaigns, citing lack of authority. State-level attempts at legislation stalled. The technology advanced faster than law could follow.
As Harvard's Bruce Schneier observed: "As AI gets more powerful and capable, it is likely to infiltrate every aspect of politics... All it takes is for one party, one campaign, one outside group, or even an individual to see an advantage in automation."
The Resistance Through Radical Verification
Easy Mode: The 24-Hour Rule
Never share dramatic video or audio for 24 hours. Period. Real events still matter tomorrow. Fake ones won't. Misinformation spreads fastest in its first hours; a day's delay defuses most of it.
One Houston teacher implemented this with her students: "Shares of fake content dropped from daily to near-zero. They started waiting, checking, thinking."
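The rule can even be mechanized. Here's a toy sketch in Python — the HoldQueue class and its API are hypothetical, not an existing tool — of a personal "cooling-off" queue that only releases a link for sharing once its hold period has elapsed:

```python
from datetime import datetime, timedelta


class HoldQueue:
    """Hold links for a cooling-off period before they may be shared.

    A toy illustration of the 24-hour rule; this class is hypothetical,
    not an existing library.
    """

    def __init__(self, hold=timedelta(hours=24)):
        self.hold = hold
        self._items = []  # list of (queued_at, link) pairs

    def queue(self, link, now=None):
        """Record a link you were tempted to share, with its timestamp."""
        self._items.append((now or datetime.now(), link))

    def ready(self, now=None):
        """Return only the links whose hold period has fully elapsed."""
        now = now or datetime.now()
        return [link for t, link in self._items if now - t >= self.hold]
```

Queue the link the moment you feel the urge; check `ready()` the next day. If it still seems worth sharing after 24 hours, it probably is.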
Medium Mode: Triangle Verification
Before believing any dramatic footage, require three types of confirmation:
Technical: Check metadata, compression artifacts, AI detection tools
Contextual: Do locations, weather, clothing match claimed date/place?
Corroboration: Are multiple credible sources reporting the same event?
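For the technical leg, even a standard-library script can run a first-pass check — for instance, whether a JPEG still carries an EXIF metadata block, since AI generators and social-media re-encoding pipelines often strip it. A minimal sketch (the function name is mine; absence of EXIF proves nothing on its own, and presence is trivially forged, so treat this as one weak signal among several):

```python
def find_exif_segment(data: bytes):
    """Scan a JPEG's segment markers for an APP1/Exif block.

    Returns the raw EXIF payload if present, else None. This is a
    hypothetical helper for illustration, not a forensic tool.
    """
    if not data.startswith(b"\xff\xd8"):  # SOI marker: not a JPEG at all
        return None
    i = 2
    while i + 4 <= len(data) and data[i] == 0xFF:
        marker = data[i + 1]
        if marker in (0xD8, 0xD9, 0xDA):  # SOI/EOI/start-of-scan: stop
            break
        # Segment length is big-endian and includes its own two bytes.
        length = int.from_bytes(data[i + 2:i + 4], "big")
        if marker == 0xE1 and data[i + 4:i + 10] == b"Exif\x00\x00":
            return data[i + 4:i + 2 + length]  # raw EXIF payload
        i += 2 + length
    return None
```

Running it over a downloaded clip's thumbnail or a shared photo takes seconds; a missing EXIF block is a cue to lean harder on the contextual and corroboration legs.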
Carnegie Mellon's Ari Lightman notes: "It's up to the consumer to determine: What is the value of it, but also, what is their confidence in it?"
Hard Mode: Reality Anchoring Networks
Build trusted verification networks with people you know physically. Create group chats specifically for confirming local events. When something happens in your city, you have five people who can say "Yes, I'm here, this is real" or "No, I'm downtown, nothing's happening."
Portland activists maintained "reality anchor" networks during 2020 protests. When fake footage emerged claiming massive violence, dozens of livestreamers could instantly show peaceful streets. The network made propaganda ineffective.
Your Information Hygiene Audit
Check your habits:
☐ Share videos within minutes of seeing them
☐ Trust content that confirms your beliefs more easily
☐ Assume dramatic footage is real if it supports your side
☐ Skip fact-checking when you're emotional
☐ Spread "awareness" without verification
☐ React to headlines without reading articles
Four or more? You're part of the problem. And that's exactly what they're counting on.
The Uncomfortable Truth
Here's what nobody wants to admit: False news is 70% more likely to be shared than true news, according to MIT research. Not because we can't tell the difference. Because fake content is designed to be shared. It's engineered for virality.
The deepfakes aren't trying to be perfect. They're trying to be engaging. Shareable. Emotional.
Real footage is messy, unclear, complicated. Fake footage is clean, obvious, simple. A Berkeley study found people share fake content faster precisely because it feels more "true" than truth.
Every time you share unverified content — even to "raise awareness" — you're training others to distrust everything. You're building the confusion authoritarians need.
You're sabotaging your own resistance.
The Success Story That Points the Way
When deepfake audio emerged claiming a Slovakian candidate was rigging votes, local journalists did something revolutionary: They refused to report on it until verified. For 48 hours, while social media exploded, mainstream media stayed silent.
By the time they reported, they had the full story: the fake, who made it, why, and how to spot similar fakes. Public trust in the outlets that waited actually increased.
Silence beats amplification. Verification beats virality. Truth beats speed.
But only if we choose it.
Tomorrow: How "both sides are bad" became the most dangerous phrase in America — and why your political neutrality is anything but neutral.
The Resistance Sabotage Manual is a 12-day series examining the specific ways we accidentally collaborate with authoritarianism — and how to stop. Based on analysis of democratic collapses from Weimar Germany to present day.
Have you shared something you later learned was fake? How did it feel? What did you learn? Share your experience below — shame is how misinformation spreads, honesty is how we stop it.