The Age of Illusion II: When War Becomes Content

Ronnie Huss

The first casualty of war is truth. AI now industrialises that casualty.

Key Takeaway

Synthetic media has become an active weapon system in modern conflict — AI-generated deepfakes, fabricated footage, and coordinated disinformation are now deployed at industrial scale to shape public perception, erode trust in authentic evidence, and manipulate decision-making on both sides of conflicts.

We’re watching the first wars where synthetic media isn’t just collateral damage. It’s a weapon system.


1. The Camera Is Now a Combatant

Seeing is no longer believing. In active conflict, the camera doesn’t just document war — it fights it. The footage reaching your feed isn’t journalism — it’s ammunition.

Take the deepfake video of Ukrainian President Volodymyr Zelenskyy urging his troops to surrender, which circulated in March 2022, in the opening weeks of the Russia-Ukraine war.

It spread rapidly across social media, amplified by pro-Russian channels, before eventually being debunked. But the damage was done. Confusion sown. Morale targeted. The correction never travels as far as the original.

This wasn’t an isolated incident. Synthetic clips, misattributed footage from video games, and coordinated drops of AI-altered video have flooded platforms throughout this conflict, turning pixels into projectiles.

In one instance, AI-generated videos depicted Ukrainian soldiers apologising to Russian forces, using the faces of real streamers to fabricate despair. The pattern is always the same: these fakes go viral before the facts can catch up, and it’s the doubt that does the real damage.


2. The Industrialisation of “First Casualty”

Old propaganda: state TV, leaflets, radio broadcasts — slow, traceable, only partially deniable.

New propaganda: AI-generated at scale, deployed via algorithm, laundered through the language of “citizen journalism.”

Speed advantage: synthetic content spreads before verification is even possible. A hacked Ukrainian news chyron broadcasting fake surrender orders can hit Telegram in minutes.

Volume advantage: generate a thousand variants and one will stick. Russian-linked operations pumped out millions of AI-produced articles and posts, poisoning both chatbots and public discourse. Bots reportedly comprised 60–80% of pro-Russia hashtag traffic in the early phases of the war, automating the flood.

Traceability? Gone in the noise.


3. The War You’re Watching Isn’t the War Being Fought

We’ve seen this playbook before with Ukraine. Now we’re watching it in real time with Iran.

The gap between ground truth and viral narrative has never been wider:

  • Misattributed footage from other conflicts repurposed as fresh “evidence”
  • AI-generated videos of destroyed cities that never existed
  • Video game footage — Arma 3, War Thunder — shared as genuine combat clips
  • Synthetic audio of leaders making statements they never made

Both sides do this. That’s the uncomfortable truth. This isn’t a story about heroes and villains — it’s about a system that benefits from eroding the very concept of verifiable fact.


4. The Consensus Manufacturing Machine

Fake consensus in peacetime moves markets. In wartime, it moves something far more consequential.

Coordinated inauthentic behaviour + generative AI = scalable reality distortion.

Deepfakes of U.S. officials — like a fabricated Matthew Miller video appearing to justify strikes — spread across platforms before anyone with the authority to debunk them had even seen them.

“Grassroots outrage” triggering policy decisions? Often astroturf. Bots manipulate opinion, AI poisons the well, and manufactured mobs start to feel organic to the people inside them.


5. Who Benefits From Fog?

When nothing can be verified, everything becomes deniable.

  • Atrocities become “alleged” indefinitely
  • Evidence becomes “contested” by default
  • The fog of war becomes a feature, not a bug

State actors deny strikes with counter-fakes. Non-state groups amplify the chaos for their own reasons. Permanent ambiguity becomes a shield for everyone involved.

Both sides benefit. That’s the system working exactly as its architects intended.


6. The Human Cost of Epistemological Collapse

This isn’t abstract. People die when truth is negotiable.

  • Aid decisions get stalled by footage too doubtful to act on
  • Interventions get delayed over “fake” claims that might not be fake at all
  • Asylum seekers find their real, documented stories dismissed as possible AI fabrications

Paralysis follows. Trust — the rarest resource in any conflict — evaporates, and civilians end up in the crossfire of competing illusions while institutions argue about what’s real.


7. The Verification Arms Race

The blockchain-based proof infrastructure discussed in the original Age of Illusion becomes relevant here: content credentials, witness-verified footage, chain-of-custody metadata.

Emerging tools are fighting back:

  • C2PA (Coalition for Content Provenance and Authenticity) — cryptographic signatures for media
  • Witness-verified footage — timestamped, geolocated, tamper-evident
  • Chain-of-custody metadata — who touched it, when, and what changed (see the sketch below)
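
To make the chain-of-custody idea concrete, here is a minimal Python sketch of a tamper-evident custody log: each handler appends a record containing a hash of the file and a hash of the previous record, so editing the footage or rewriting the history breaks the chain. The structure and field names (handler, content_hash, prev_entry_hash) are invented for illustration; this is not C2PA or any other real standard.

```python
import hashlib
import json
from datetime import datetime, timezone


def sha256_file(path: str) -> str:
    """Hash the media file so any later alteration is detectable."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()


def append_custody_entry(log: list, path: str, handler: str, action: str) -> list:
    """Append a custody record that chains to the previous entry's hash."""
    entry = {
        "handler": handler,                                   # who touched it
        "action": action,                                     # what they did
        "timestamp": datetime.now(timezone.utc).isoformat(),  # when
        "content_hash": sha256_file(path),                    # state of the file
        "prev_entry_hash": log[-1]["entry_hash"] if log else None,
    }
    # Hash the entry itself so the whole log is tamper-evident end to end.
    entry["entry_hash"] = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode()
    ).hexdigest()
    log.append(entry)
    return log


# Each handoff appends an entry; any later edit to the file or the history
# changes a hash somewhere, which breaks the chain and flags the tampering.
log = append_custody_entry([], "clip.mp4", "field_witness", "captured")
log = append_custody_entry(log, "clip.mp4", "newsroom", "verified")
```

Standards like C2PA go further, binding provenance records into the media file itself with cryptographic signatures; this sketch only shows the hash-chaining idea that makes tampering detectable.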

But the arms race is fundamentally asymmetric: fabrication is cheap, verification is expensive.

A deepfake costs pennies. Proving it’s fake costs hours of expert analysis — hours that the news cycle simply doesn’t grant. The incentives are structurally broken, and no amount of media literacy campaigns changes that underlying economics.


Final Thought

We’re not watching wars anymore. We’re watching competing simulations of wars.

And the winner isn’t whoever has truth — it’s whoever makes their version feel true first.

Welcome back to the Age of Illusion. The stakes just got higher.


This is Part II of the Age of Illusion series. Read Part I: When AI Makes Seeing Not Believing.

Frequently Asked Questions

How is synthetic media being used in modern warfare?

Synthetic media is used to fabricate evidence of atrocities or victories that never occurred, create false attributions of statements to real leaders, generate coordinated disinformation at scale faster than fact-checkers can respond, and erode confidence in authentic footage by creating plausible deniability for real events.

What makes AI-generated disinformation harder to counter than previous propaganda?

AI reduces the cost and skill required to produce convincing fake content to near-zero, enables production at scale that overwhelms verification capacity, and creates a ‘liar’s dividend’ where even genuine footage can be dismissed as synthetic. The volume alone defeats traditional fact-checking approaches.

How can individuals and institutions defend against synthetic media?

Defence strategies include: developing media literacy and provenance verification skills, supporting cryptographic content authentication standards like C2PA, treating unverified viral content as suspect by default, and investing in AI detection tools — while accepting that no defence is currently foolproof against state-level synthetic media operations.

About the Author

Ronnie Huss is a serial founder and AI strategist based in London. He builds technology products across SaaS, AI, and blockchain. Learn more about Ronnie Huss →

Follow on X / Twitter · LinkedIn

