
How to Spot AI Propaganda: Iran’s Trump Videos Explained

Iran's AI-generated videos mocking Trump just went viral. Here's how to analyze propaganda deepfakes yourself: detection tools, visual clues, and what makes this war different.

10 min read · Beginner

Iran just dropped a Lego-style AI video mocking Trump and Netanyahu, complete with AI-generated rap music and a bruised Trump minifigure holding a sign that says “I’m a loser.” The video went viral within hours. It’s part of a wave of AI propaganda flooding social media right now – and according to NPR reporting from March 26, 2026, this marks the first major conflict where state actors are using AI-generated content to directly target enemy populations at scale.

You’ll learn how to analyze these videos yourself, which free tools actually work, and the three detection methods that catch what your eyes miss. We’re reverse-engineering real propaganda circulating today – not rehashing generic deepfake theory.

Why Iran’s AI Videos Flip the Script

Most tutorials teach you to spot hyper-realistic face swaps – the kind meant to fool you into thinking it’s 100% real. Iran’s approach? Complete opposite.

Volume over precision. That’s Iran’s play – according to NPR’s March 26 analysis, their recent output “appears to prioritize volume over precision, with waves of content that feel rapidly produced and designed for maximum spread.” The Lego video, the interrogation deepfake showing a sweating shirtless Trump hooked to a lie detector, the Teletubby Trump in the Oval Office – none try to pass as authentic footage. Stylized, absurd, meme-ready on purpose.

Think about what that means for detection. The tells you’ve been taught – check for unnatural blinking, look for skin texture issues – don’t apply here. Lego figures don’t blink. Cartoon Trump doesn’t have pores.

This is propaganda as attention warfare. Emerson Brooking of the Atlantic Council’s Digital Forensic Research Lab told NPR something striking: “Americans are not used to seeing messages from a country the U.S. is bombing that are directed at them. This is quite new.” The videos work because they’re weird, shareable, and tap into existing political fractures.

3-Step Verification Process

You see a viral video claiming to show Trump, an explosion, or a political figure saying something inflammatory. Here’s how to verify it – starting with the fastest checks and moving to deeper analysis.

Step 1: Context Check (30 seconds)

Before you even analyze the video itself, ask: where’d this come from?

Three things to verify:

  • Source verification: Official account, or random account created June 2025? (Snopes confirmed the Iran Lego video came from an account created June 2025, based in Iran, with possible ties to Tasnim News Agency.)
  • Reverse search the thumbnail: Right-click the video thumbnail, choose “Search image with Google” or paste into TinEye. If it’s been circulating for weeks under different claims, red flag.
  • Check trusted outlets: Did Reuters, AP, or BBC cover this? Major event with only sketchy accounts posting? Pause.

The Iran videos didn’t hide their origin – they wanted attribution because propaganda works when you know who made it. Most deepfake scams will obscure the source. Can’t find verified origin in 30 seconds? Treat as suspect.
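The account-age part of that context check is easy to make mechanical. Here's a minimal sketch in Python: `account_age_flag` is a hypothetical helper (the name and the 365-day threshold are illustrative, not from any platform's API), fed the dates from the Snopes example above.

```python
from datetime import datetime, timezone

def account_age_flag(created_at: str, checked_on: str, min_days: int = 365) -> bool:
    """Flag accounts younger than min_days as suspect.

    Dates are ISO 8601 strings, e.g. "2025-06-15". The threshold is an
    illustrative heuristic, not an official standard.
    """
    created = datetime.fromisoformat(created_at).replace(tzinfo=timezone.utc)
    checked = datetime.fromisoformat(checked_on).replace(tzinfo=timezone.utc)
    return (checked - created).days < min_days

# The Snopes example: account created June 2025, video circulating late March 2026.
print(account_age_flag("2025-06-15", "2026-03-26"))  # → True (under a year old)
```

A decade-old account passing this check proves nothing on its own, of course – it's one signal among the three the step lists, not a verdict.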

Step 2: Visual Inspection (2 minutes)

Scrub through frame by frame. On YouTube, the comma and period keys step one frame at a time while the video is paused; on most other players, pause and drag the progress bar in small increments. Watch for:

  • Object morphing: Static objects (signs, clothing, backgrounds) change shape between frames. In the Lego video, the blocky figures have hands that flicker – one frame shows a bruised purple hand whose details shift slightly.
  • Lip-sync drift: Mouth movements don’t match audio timing. The interrogation video has Trump repeating phrases – watch whether the mouth closes before the word ends.
  • Unnatural motion: Movement that’s too smooth or jittery, lacking natural physics. The AI-generated missiles in the Lego video trace overly perfect arcs – real debris doesn’t move that cleanly.
  • Lighting inconsistencies: Shadows point in different directions, or faces are too bright or dark for the scene. The shirtless Trump interrogation scene has flat lighting – no shadow cast by the lie detector wires.
  • Edge warping: Blurred or flickering edges around faces, especially near hair or glasses. Non-photorealistic styles hide this, but check any “realistic” deepfake Iran posts.

The catch: social media compression makes visual inspection harder. Iran posts to X/Twitter or Telegram – platforms compress files hard. That compression erases subtle artifacts and strips metadata. Academic research on deepfake detection confirms “highly compressed media commonly found on social networks” significantly reduces detection accuracy. You’re not looking at the original – you’re looking at a degraded copy.

Pro tip: Download the highest-quality version available if you can. On Twitter/X, tools like “twitter video downloader” browser extensions grab slightly better files than the in-browser stream. On Telegram, videos are often less compressed than other platforms.
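The object-morphing tell above can be approximated numerically: if a "static" shot suddenly changes a lot between two consecutive frames, something is drifting. This is a toy sketch, not a production detector – real workflows would decode frames with ffmpeg or OpenCV, while here frames are simple 2D lists of grayscale values (0–255) and the threshold is arbitrary.

```python
# Toy temporal-coherence check: flag frame pairs whose average pixel
# difference spikes, which can indicate objects morphing between frames.

def mean_abs_diff(frame_a, frame_b):
    """Average absolute per-pixel difference between two same-sized frames."""
    total, count = 0, 0
    for row_a, row_b in zip(frame_a, frame_b):
        for pa, pb in zip(row_a, row_b):
            total += abs(pa - pb)
            count += 1
    return total / count

def flag_morphing(frames, threshold=20.0):
    """Return indices i where the change from frame i to i+1 exceeds threshold."""
    return [i for i in range(len(frames) - 1)
            if mean_abs_diff(frames[i], frames[i + 1]) > threshold]

static = [[100] * 4 for _ in range(4)]   # a steady "background"
shifted = [[160] * 4 for _ in range(4)]  # a sudden jump in a static object
print(flag_morphing([static, static, shifted, shifted]))  # → [1]
```

On compressed social-media video the threshold would need tuning – compression noise alone produces small frame-to-frame differences, which is exactly the degradation problem described above.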

Step 3: Run It Through Detection Tools (5 minutes)

Free tools won’t catch everything, but they’ll flag obvious fakes. Three you can use right now, no signup:

  1. TrueMedia.org – Upload video, get probability score. GIJN’s reporter guide says it analyzes audio, images, video for deepfake signals. Works on URLs (paste YouTube/Twitter link) or file uploads.
  2. Deepware Scanner – Video deepfakes only. Upload file, wait for frame-by-frame analysis. Slower but digs into facial manipulation patterns.
  3. AI Video Detector (browser-based) – Examines metadata, motion patterns, compression artifacts. The tool’s own docs admit 75-85% accuracy on common deepfakes, warns “advanced deepfakes created with state-of-the-art techniques may evade detection.” State actors like Iran? Probably in that advanced category.

Reality check: these tools train on public datasets. Iran’s propaganda team? Probably using newer generative models not in those datasets. Tool says “probably real” but your visual inspection found morphing objects and weird shadows? Trust your eyes.

The Lego video is stylized AI animation, not photorealistic deepfake. Detection tools trained on face-swap datasets might not flag it at all – no human face to analyze. You’re on your own with visual inspection for cartoon-style propaganda.

The Tells You Won’t Find

Most tutorials say: check eye blinks, skin texture, facial symmetry. Works for Zoom deepfakes. Useless here.

Iran’s Lego video has no skin. Teletubby Trump has no realistic eyes. Even the “realistic” interrogation deepfake is so heavily stylized (shirtless, sweating, cartoonish lie detector) that it’s obviously synthetic.

The goal isn’t to fool forensic analysis. The goal is virality.

Whitney Phillips, media ethics professor at University of Oregon, explained to NPR: “This is the language in which Trump speaks, and so this is the language in which world leaders are speaking to him.” Trump popularized meme warfare in politics. Iran studied it and fired back.

So what should you look for in stylized AI propaganda?

  • Temporal coherence: Do objects stay consistent across frames? In AI-generated animation, background details morph – a door shifts position, text flickers. Scrub slowly and watch the static objects.
  • Audio artifacts: AI-generated rap (like in the Lego video) often has unnatural rhythm or pitch. Does the voice have breathing pauses? Do words blend together unnaturally?
  • Message over realism: If a video prioritizes a clear political message and doesn’t care about looking 100% real, it’s likely intentional propaganda. Real deepfake scams (CEO fraud, celebrity endorsements) need realism. Propaganda just needs attention.

Why Detection Is Losing

What guides don’t admit: detection is losing the arms race.

Iran is reportedly producing these videos in waves – multiple per day. They’re not spending weeks perfecting one hyper-realistic deepfake; they’re cranking out “good enough” fakes designed to spread fast, get screenshotted, and live on as memes even after they’re debunked.

Free tools struggle – three reasons:

  • Trained on high-quality datasets where realism is the goal. Iran’s videos don’t fit that profile.
  • Social media compression strips signals tools rely on (metadata, subtle pixel artifacts).
  • Volume overwhelms manual review. Spot one fake? Ten more drop while you’re analyzing the first.

This is why MIT Media Lab’s research on deepfake detection emphasizes building intuition through practice. Tools help, but they’re not magic. You need to train your eye.

What This Means Beyond Politics

You might not care about the Iran–Trump story. Fair. But these techniques land in scammers’ hands next.

The strategy – rapid-fire AI content, stylized to avoid detection heuristics, designed for social virality – works for political propaganda and fraud. Expect:

  • Fake celebrity endorsements in cartoon or Lego style (harder to detect than photorealistic deepfakes).
  • AI-generated “news” videos with intentionally crude animation (people share them anyway because they confirm biases).
  • Manipulated audio that’s obviously synthetic but spreads because the message matters more than the medium.

Iran proves deepfakes don’t need perfection – just speed, shareability, emotional charge.

Your defense? Slow down. The 30-second context check from Step 1 stops most propaganda and scams. Can’t verify source? Don’t share. Feels designed to make you angry or scared? That’s intentional – pause before you react.

Detection tools help. Visual inspection matters. But the best filter? Your willingness to fact-check before you hit retweet.

Frequently Asked Questions

Can I trust AI detection tools if they only have 75-85% accuracy?

Use them as one signal. That 75-85% accuracy? Only for common deepfakes – per AI Video Detector docs. State actors use newer stuff. Combine tool results with visual inspection and context checks. All three raise flags? Almost certainly fake. Tool says “real” but you see morphing objects or impossible lighting? Trust your eyes.

How do I analyze videos posted directly to Twitter/X or TikTok?

Most detectors take URLs. Paste the Twitter link into TrueMedia.org or Overchat AI detector – they support YouTube, TikTok, Twitter, Instagram links. Challenge: compression. The version you analyze is degraded, subtle artifacts harder to spot. Download if possible (browser extensions for Twitter, TikTok downloaders) to get slightly higher-quality file. Even compressed videos show obvious tells like object morphing and lip-sync drift if you scrub frame by frame. One trick: pause at random moments. AI-generated motion often looks perfect in real-time but unnatural when frozen – a hand mid-gesture might have impossible finger positions, or a background object might be half-morphed into something else.

What if the video is intentionally cartoony like Iran’s Lego propaganda – can tools even detect that?

Not reliably. Tools train on photorealistic faces. Lego figures? Cartoon characters? Different game. Most detection algorithms look for facial landmarks (eyes, nose, mouth edges) and analyze how those landmarks move – deepfake detection literature focuses on “facial transformations” and “high-end deepfake manipulations.” No face means those heuristics don’t trigger. For non-photorealistic AI propaganda, rely on visual inspection: objects morphing between frames, unnatural motion (too smooth or jittery), audio artifacts. The fact that it’s obviously synthetic doesn’t mean it’s harmless – stylized propaganda spreads precisely because it’s meme-ready and doesn’t try to fool forensic tools. Actually, there’s something interesting here: the Iran Lego video is harder to debunk than a photorealistic deepfake because you can’t point to “this face looks wrong.” The wrongness is in the physics – a Lego brick rotates in a way real Lego can’t, or the camera angle shifts without the scene changing perspective correctly. It turns out cartoon-style fakes require you to understand 3D animation principles, not just whether a person’s skin looks real.

Your Next Step

Bookmark TrueMedia.org and practice on viral videos you see this week. Pick three claims circulating on social media – political, celebrity, whatever – and run them through the 3-step process: context check, visual inspection, tool analysis. You’ll build intuition fast. The Coalition for Content Provenance and Authenticity (C2PA), led by Microsoft, Adobe, OpenAI, Google, and Meta, is working on content labeling standards, but until those are universal, your skepticism is the best defense.