In March 2026, a deepfake video of a sitting US senator went viral. It took 72 hours to definitively prove it was fake. By then, 40 million people had seen it. The correction reached 200,000.
Those numbers again, because they define the world we now live in. Forty million saw the lie. Two hundred thousand saw the truth. A ratio of 200:1. This wasn't some fringe conspiracy site — mainstream social media, shared by journalists, amplified by algorithms, treated as news for three full days.
This is not a technology problem. This is an epistemology problem. The deepest question humans can ask — "how do I know what's true?" — has been destabilized by a technology that makes truth and fiction visually, audibly, and textually indistinguishable.
We have no solution. Not a "we need more research" kind of no solution. A "we don't even have a theoretical framework for solving this" kind.
The Progression Nobody Tracked
In 2018, deepfakes were obviously fake. Weird flickering around edges. Uncanny valley faces. You could tell within seconds.
By 2020, convincing enough to fool a casual viewer but still detectable by experts.
By 2022, good enough that journalists had to verify sourcing before publishing any video of public figures.
By 2024, indistinguishable from real footage to any human viewer, requiring computational forensic analysis to detect — and even that only worked sometimes.
By 2026, we've crossed what researchers call the "verification horizon" — detecting a fake takes longer than the news cycle. By the time you prove something is false, the damage is done and the public has moved on.
From obviously fake to undetectably real in eight years. Barely longer than it takes to earn a PhD in the subject.
Not Just Video
Everyone focuses on deepfake videos because they're the most dramatic example. But the trust collapse is hitting every information medium at once.
Voice cloning. ElevenLabs can clone any voice from 30 seconds of audio. Thirty seconds. Anyone who's ever spoken publicly — podcast guests, conference speakers, YouTubers — has given the world enough material to put any words in their mouth. A CEO's earnings call, synthesized. A politician's concession speech, fabricated. A family member asking for money, faked.
In 2025, voice-cloning scams cost Americans $25 billion. Scammers called victims while impersonating their children, spouses, and parents, using AI voices indistinguishable from the real person. Your phone rings. You hear your daughter's voice: "Mom, I've been in an accident, I need you to wire money." You have no way to know it's not her.
Text generation. The entire concept of "reading something online and knowing who wrote it" is dead. AI-generated articles flood Google. AI comments dominate Reddit threads. AI reviews populate Amazon. At what point does "I read that..." become meaningless?
Image manipulation. Not just generation from scratch — modification of real images. Adding people to photos they weren't in. Removing people from photos they were in. Changing expressions, contexts, settings. A real protest photo with AI-added flames. A real politician's face with an altered expression. The assumption that photographs document reality — an assumption that held for nearly 200 years — is no longer operational.
The Epistemological Crisis
Let me be precise about what "epistemological crisis" means here. This isn't academic jargon. It's the practical collapse of shared reality.
For most of human history, we agreed on how to determine truth. We had hierarchies of evidence. Direct witness testimony. Multiple corroborating accounts. Physical evidence. Expert analysis. Published records. Photographic proof.
AI has compromised every single one.
Witness testimony can be fabricated by AI avatars in video calls. Multiple corroborating accounts can be generated by bot networks. Physical evidence can be digitally manufactured. Expert analysis can't keep pace with generation speed. Published records can be retroactively altered. Photographic proof means nothing.
What's left? What mechanism remains for a society to agree on what happened?
Not hypothetical. Happening now.
Attorneys in courtrooms are challenging video evidence as potentially AI-generated. Newsroom verification teams spend days on stories that used to take hours. Voters dismiss real footage as fake ("probably AI") and accept fake footage as real ("looks authentic"). The tools meant to fight misinformation have created something worse: a world where ALL information is suspect.
Researchers call it the "liar's dividend" — when any evidence can be dismissed as AI-generated, guilty people benefit most. "That video of me accepting the bribe? Obviously AI-generated. Prove it isn't." Proving a negative in the age of synthetic media is effectively impossible in any timeframe that matters.
The Historical Parallel
The closest parallel is the printing press.
Before Gutenberg, truth was mediated by authorities — primarily the Church. They controlled what was copied, distributed, considered canonical. Enormously restrictive, often corrupt, fundamentally anti-democratic. But it maintained a shared reality. Everyone in a community agreed on basic facts because the facts came from one source.
The printing press shattered that monopoly. Anyone could publish anything. Martin Luther's 95 Theses went viral (in 1517 terms). Multiple competing versions of truth emerged. The result? A century of religious wars. Millions dead. The eventual emergence of new truth-mediating institutions: modern science, journalism, legal systems, democratic governance.
That transition took 200 years.
AI is doing to evidence what the printing press did to authority. Shattering the monopoly — not of who can publish, but of what counts as proof. The old evidence hierarchy (photos > testimony > claims) depended on photos being expensive and hard to fake. When they're free and trivial to fake, the hierarchy collapses.
What new truth-mediating institutions emerge? And how many decades of chaos do we endure before they solidify?
The Solutions Being Tried
People are working on this. Some approaches show promise. None are sufficient.
C2PA (Coalition for Content Provenance and Authenticity). An industry consortium — Adobe, Microsoft, Intel, others — building "content credentials." Tamper-evident digital signatures tracking an image or video from capture device to publication. Think blockchain-verified chain of custody for media.
The problem: it's opt-in. Requires hardware manufacturers to embed credentials at capture, software makers to preserve them, platforms to display them. Any break in the chain destroys provenance. And it doesn't help with anything created before the system existed — meaning essentially all historical media.
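The chain-of-custody idea is easy to sketch. This is a toy illustration, not the real C2PA format (which uses signed JUMBF manifests and X.509 certificates); it only shows the core mechanism: each processing step commits to a hash of the media and to the previous entry, so any break or edit in the chain is detectable.

```python
import hashlib
import json

def add_entry(chain: list, media_bytes: bytes, action: str) -> list:
    """Append a tamper-evident provenance entry (e.g. "capture", "crop")."""
    prev_hash = chain[-1]["entry_hash"] if chain else "genesis"
    entry = {
        "action": action,
        "media_hash": hashlib.sha256(media_bytes).hexdigest(),
        "prev": prev_hash,
    }
    # The entry's own hash commits to everything above, chaining it to history.
    entry["entry_hash"] = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode()
    ).hexdigest()
    return chain + [entry]

def verify(chain: list, media_bytes: bytes) -> bool:
    """Check that the chain is intact and ends at the media we received."""
    if not chain:
        return False
    prev = "genesis"
    for entry in chain:
        body = {k: v for k, v in entry.items() if k != "entry_hash"}
        if body["prev"] != prev:
            return False  # broken link in the chain
        expected = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()
        ).hexdigest()
        if expected != entry["entry_hash"]:
            return False  # entry was altered after the fact
        prev = entry["entry_hash"]
    return chain[-1]["media_hash"] == hashlib.sha256(media_bytes).hexdigest()
```

Note what the sketch also demonstrates about the opt-in problem: `verify` returns False both for tampered media and for media that simply never entered the system, so a failed check proves nothing by itself.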
Blockchain-based verification. Startups using blockchain to timestamp and authenticate media at creation. If you can prove an image was captured at a specific time and place by a specific device, you can authenticate it.
The problem: solves authentication for future media, not the generation problem. An AI-generated image won't have blockchain verification — but absence of verification doesn't prove fakeness. Just proves the creator didn't use the system.
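The asymmetry in that last point can be made concrete. In the sketch below, a plain dict stands in for the blockchain or transparency log; real systems anchor the hash in a public ledger. The key behavior is the tri-state result: a registry miss means "unregistered", never "fake".

```python
import hashlib
from datetime import datetime, timezone

# Hypothetical in-memory registry standing in for a public ledger.
registry: dict[str, str] = {}

def register(media: bytes) -> str:
    """Record a timestamped hash of the media at capture time."""
    digest = hashlib.sha256(media).hexdigest()
    registry[digest] = datetime.now(timezone.utc).isoformat()
    return digest

def check(media: bytes) -> str:
    """Look up the media's hash. A miss proves only non-registration."""
    digest = hashlib.sha256(media).hexdigest()
    return registry.get(digest, "unregistered (not proof of fakery)")
```

An AI-generated image will come back "unregistered", but so will every authentic photo whose creator never opted in. That is the authentication gap the text describes.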
Detection AI. Companies building AI that spots AI-generated content. A technological arms race — generation versus detection, getting better in lockstep.
The problem: detection is fundamentally harder than generation. The generator only needs to fool the detector once. The detector needs to catch every fake, every time, across every technique. Asymmetric war. Defense is losing.
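The arithmetic behind that asymmetry is worth seeing. Assume, purely for illustration, a detector that independently catches 99% of fakes per attempt; an attacker who can generate fakes for free only needs one to slip through.

```python
def prob_at_least_one_slips(detect_rate: float, attempts: int) -> float:
    """Probability that at least one of `attempts` fakes evades detection,
    assuming each attempt is caught independently with `detect_rate`."""
    return 1 - detect_rate ** attempts

# A 99%-accurate detector facing 100 free attempts:
p = prob_at_least_one_slips(0.99, 100)
print(f"{p:.0%}")  # roughly 63%
```

Even a near-perfect detector loses to a patient attacker, which is why "getting better in lockstep" still favors generation.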
"Proof of humanity" systems. Biometric verification, hardware-backed attestation, captchas on steroids.
The problem: proving a human created content doesn't prove the content is true. Humans lie. Humans can frame deepfakes as captured footage. And requiring proof-of-humanity for all online communication creates surveillance infrastructure with its own massive risks.
What This Means for Everything
Second-order effects. This is where it gets truly frightening.
Democracy. Campaigns will operate in an environment where any candidate can be depicted saying or doing anything, with verification arriving too late to matter. Opposition research becomes "generate a plausible fake and release it on Friday before the Tuesday election."
Courts. The legal system has relied on evidence since its inception. When all digital evidence is suspect, proceedings slow dramatically, costs multiply, and outcomes become less reliable. The wealthy get better justice through expensive forensic analysis. Everyone else drowns in uncertainty.
Journalism. A fake takes seconds to create and minutes to go viral. Verification takes days and costs real money. News organizations are already understaffed. Adding verification burdens to every piece of media is economically unsustainable at social media speed.
Personal relationships. "I saw a video of you" destroys trust regardless of whether the video is real. The mere possibility of fabrication poisons intimate relationships. When anyone can generate synthetic intimate imagery of anyone else from a handful of social media photos, the weapon becomes universal.
Financial markets. Fake CEO statements, fabricated earnings calls, synthetic analyst reports. Markets move on information. When information can be manufactured at zero cost, market manipulation becomes trivial and real-time detection impossible.
The Fundamental Shift
Here's what I keep coming back to. We're moving — rapidly and irreversibly — from a high-trust to a low-trust information environment.
For the past century, the default when encountering media was trust. Photo in the newspaper? Believed it. Audio of a phone call? Accepted it. News article? Assumed a journalist wrote it.
The default is now suspicion. And suspicion is corrosive in ways that go far beyond individual misinformation incidents.
In a low-trust environment, people retreat to tribal knowledge. Believe their in-group, distrust everything else. Shared reality fractures into competing narratives with no mechanism for resolution. Consensus becomes impossible because the evidentiary foundation has been destroyed.
This isn't unprecedented. Russia has operated in a low-trust media environment for decades — citizens assume everything is propaganda and resort to cynicism as survival strategy. "All sides lie, so I'll believe whichever lie benefits me."
That's the endpoint. Not a world where everyone is deceived. A world where everyone ASSUMES deception and acts accordingly. Where trust itself becomes a vulnerability. Where "is this real?" has no reliable answer, so people stop asking.
What You Can Do (Honestly, Not Much)
I refuse to end with "10 tips to protect yourself from the epistemological collapse of civilization." There are no easy answers. But some habits help.
Slow down. The 72-hour rule: something shocking appears online, give it 72 hours before forming a strong opinion. Most fakes are identified within that window.
Triangulate. Don't trust any single source for anything important. Multiple independent confirmations. Emphasis on independent — bot networks create illusions of consensus.
Invest in relationships with trusted sources. When you can't trust media generally, fall back on specific people whose judgment you've verified over time. Cultivate those relationships.
Accept uncertainty. Hardest one. We evolved to want certainty. To categorize things as true or false. The AI age requires comfort with "I don't know" and "I can't verify this." Deeply uncomfortable. Also necessary.
Demand better systems. Support content provenance standards. Pressure platforms to implement verification. Push for media literacy education. Insufficient solutions — but insufficient beats nothing.
Where This Ends
I genuinely don't know. Nobody does.
Optimistic scenario: new truth-mediating institutions emerge. Content provenance becomes standard. A new social contract around verification develops, like scientific peer review eventually standardized after the printing press chaos.
Pessimistic scenario: shared reality fractures permanently. Society reorganizes around competing belief systems with no resolution mechanism. Democratic governance — which requires agreed-upon facts — becomes impossible. Authoritarianism thrives because "strong leaders" offer certainty in a world drained of it.
Probable scenario: something in between. Messy, uneven. Financial markets might solve verification quickly because money is at stake. Public discourse might take decades because nobody profits from truth the way they profit from engagement.
The world where you could believe your own eyes is over. The world where a photograph was proof is over. "Seeing is believing" is now nothing more than a nostalgic phrase.
We're all residents of a world where any evidence can be fabricated, any proof questioned, and the cost of deception has fallen to zero while the cost of verification has risen to infinity.
Good luck to all of us.
---
Keep Reading
- The AI Loneliness Paradox — when you can't trust information OR relationships, what's left?
- AI Is Killing the Real Estate Industry — deepfake property listings are just the start
- ChatGPT vs Claude vs Gemini — the tools generating this content, compared honestly
- MCP Servers as a Business Opportunity — the infrastructure layer that could help rebuild trust



