How AI Turned Your Neighbor Into a War Correspondent – AI LIVE NEWS

Dubai’s Burj Khalifa, the tallest building in the world, engulfed in flames. Missiles are flying. People are screaming. The end is clearly nigh. You feel your heart rate spike, your palms get sweaty, and you briefly consider calling your mother to tell her you love her.

Then you notice something weird. The flames look a little too perfect. The people running away seem to be copy-pasted. And is that a watermark in the corner that says “Generated by AI”?

Congratulations. You’ve just been duped by the latest, greatest, and most absurd development in the US-Iran war: the rise of the AI war profiteer.

The New Face of War Journalism (He Lives in Pakistan and Has 31 Phones)

Let’s set the scene. It’s March 2026, and the United States and Israel have been launching strikes on Iran since February 28. Iran has responded with drone and missile attacks on Israel and multiple Gulf nations. It’s a genuinely tense, dangerous, and complicated geopolitical situation that actual experts are struggling to understand.

Meanwhile, in Pakistan, some guy is sitting in a room with 31 phones, drinking chai, and producing more “war coverage” than CNN, BBC, and Al Jazeera combined.

According to Nikita Bier, the head of product development at X, the platform discovered a Pakistani user who had managed to hack or create 31 separate accounts posting AI-generated war videos. All of them had their usernames changed on February 27 to things like “Iran War Monitor” or some variation thereof. It was a one-man propaganda machine, churning out fake footage of a conflict he was thousands of miles away from, all while presumably wearing pajamas.

Bier’s assessment of the situation was delightfully blunt: “It’s just broke people trying to scalp creator rev[enue] share and jumping on any relevant trend (sic)”. Not ideology. Not political conviction. Not a deep-seated desire to shape global narratives. Just broke people trying to make a buck.

The Money Machine: How Fake Wars Pay Real Bills

Here’s where the economics get interesting. X runs something called the Creator Revenue Sharing programme. The rules are simple: get a Premium subscription, rack up five million organic impressions in three months, and suddenly you’re eligible for payments based on how many eyeballs your content attracts. According to estimates, X pays about eight to twelve dollars per million verified user impressions.

Now do the math on a video that shows the Burj Khalifa exploding. One of those fake videos was viewed tens of millions of times. That’s not just viral. That’s a retirement fund.
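To actually put numbers on it: here is a back-of-the-envelope sketch in Python, assuming the reported eight to twelve dollars per million verified impressions. The midpoint rate, the per-account video count, and the per-video view figures below are illustrative guesses, not numbers from X:

```python
def payout_usd(views, rate_per_million=10.0):
    """Estimated Creator Revenue Sharing payout, assuming a flat
    $8-12 per million verified impressions ($10 midpoint used here)."""
    return views / 1_000_000 * rate_per_million

# One viral fake: 70 million views
single = payout_usd(70_000_000)

# A hypothetical 31-account operation, each account posting
# ~20 mid-tier fakes at ~5 million views apiece
network = 31 * 20 * payout_usd(5_000_000)

print(f"single viral video: ${single:,.0f}")
print(f"31-account network: ${network:,.0f}")
```

Note what the arithmetic actually says: even a monster viral hit pays hundreds of dollars, not millions, which is exactly why the playbook is dozens of accounts posting constantly rather than one lucky upload.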

Timothy Graham, a digital media expert at the Queensland University of Technology, described the situation with the enthusiasm of someone watching a train wreck in slow motion: “Once you’re in, viral AI-generated content is basically a money printer. They’ve built the ultimate misinformation enterprise”.

Think about that. The same technology that can generate a video of your cat riding a Roomba while wearing a tiny hat can now generate a video of an international crisis. And it pays better.

The Greatest Hits of AI War Cinema

Let’s take a moment to appreciate the sheer creativity on display. If you’re going to fake a war, you might as well do it with style. The AI-generated content from this conflict has been nothing short of spectacular.

The Burj Khalifa Inferno: A video showing Dubai’s iconic skyscraper completely engulfed in flames, with crowds fleeing in terror. Dramatic. Terrifying. And completely fake. The video spread during a time of genuine anxiety among residents and tourists about actual drone and missile strikes in the region, which is either brilliant marketing or deeply unethical, depending on your perspective.

The Tel Aviv Missile Show: Another AI-generated gem showed missiles raining down on Tel Aviv, explosions rocking the city. This particular masterpiece appeared in more than 300 separate posts and was shared tens of thousands of times. Some X users, bless their hearts, asked the platform’s AI chatbot Grok to verify if the footage was real. Grok, in a moment of AI-on-AI solidarity, insisted it was authentic. AI-generated video, verified by AI hallucination. It’s turtles all the way down.

The Weeping Soldiers: One particularly emotional clip showed Israeli soldiers crying in fear, supposedly at an Iranian strike. It racked up 1.4 million views before anyone noticed that real soldiers probably don’t cry in perfectly lit, dramatically framed, slow-motion shots.

The Fake Satellite Imagery: This one’s for the connoisseurs. After authentic videos showed Iranian strikes on the US Navy’s Fifth Fleet headquarters in Bahrain, a fake satellite image began circulating, claiming to show extensive damage. The image was allegedly generated with a Google AI tool and was based on real satellite imagery from February 2025. The giveaway? Three parked cars appeared in exactly the same spots in both images, despite the photos supposedly being taken a year apart. Even AI forgets to move the cars.

The Scale of the Problem (It’s Bigger Than Your Average Meme)

This isn’t a handful of weird videos circulating among conspiracy theorists. BBC Verify’s analysis found that AI-generated content about the conflict has collectively amassed hundreds of millions of views. NewsGuard, a misinformation watchdog, documented posts that had already garnered at least 21.9 million views across X in just the first few days of the conflict.

One video claiming to show Iranian rockets pursuing and shooting down a US jet was viewed 70 million times. A single fabricated image of an aircraft carrier sinking was shared by a Kenyan member of parliament and viewed more than 6 million times before anyone pointed out it was actually footage from a 2006 naval exercise.

Henry Ajder, a generative AI expert, put it in perspective: “The number of different tools that are now available to create a wide range of highly realistic AI manipulations is unprecedented. We have never seen these tools so available, so easy and so cheap to use”.

Translation: Your 14-year-old nephew with a laptop and too much free time now has Hollywood-level production capabilities.

X’s Response: “Okay, Fine, We’ll Do Something”

Even Elon Musk, who has spent the last few years dismantling content moderation systems like a kid taking apart a clock, eventually noticed that maybe, just maybe, letting people get paid for fake war videos was a bad look.

On March 3, X announced a policy change: users who post AI-generated videos of armed conflicts without labeling them will be suspended from the Creator Revenue Sharing programme for 90 days. Repeat offenders get permanently banned.

Nikita Bier, announcing the change, struck a tone of wounded sincerity: “During times of war, it is critical that people have access to authentic information on the ground. With today’s AI technologies, it is trivial to create content that can mislead people”.

It’s worth noting that this policy applies specifically to war videos. Political misinformation, economic falsehoods, and general nonsense are apparently still fair game. But hey, progress.

The Trump administration even praised the move. Sarah Rogers, the under secretary of state for public diplomacy, said approvingly: “You don’t need a Ministry of Truth to incentivize truth online”. Which is either profound or terrifying, depending on how you feel about private companies deciding what truth is.

The Bigger Picture: Welcome to the Future

Here’s the thing that keeps experts up at night: this is just the beginning. Victoire Rio, executive director of the technology policy non-profit What To Fix, pointed out that the pipeline from content creation to social media distribution can now be almost fully automated. AI generates the content. AI posts it. AI engages with it. AI monetizes it. Humans just collect the checks.

Steve Nowottny, editor at the UK fact-checking organization Full Fact, described the situation with the weary resignation of someone who’s seen too much: “Even when AI images seem low quality, or still have a visible watermark on them, we often see them shared at scale – and the sheer volume of this fake content and the ease with which it is generated and spreads is a real concern”.

The deeper problem, as Timothy Graham pointed out, is structural: “engagement-driven monetisation and accurate information are fundamentally in tension, and no platform has fully resolved that tension or perhaps ever will”.

In other words, as long as fake content pays better than real content, we’re going to get a lot of fake content.

How to Spot the Fakes (Before You Panic)

Since the platforms are playing catch-up and the AI tools keep getting better, here’s a quick guide to not being fooled:

  • Look for weird textures. AI often gives things an airbrushed, slightly too-perfect quality.
  • Check the shadows. If they’re pointing in different directions or look unnatural, something’s off.
  • Watch for physical inconsistencies. Extra fingers, strangely shaped objects, cars that appear and disappear.
  • Note the duration. Many AI videos are suspiciously short.
  • Check the source. Is this account three days old with 31 followers and suddenly breaking major news? Probably not.
  • Wait for confirmation. If it’s real, legitimate news outlets will eventually cover it. If it disappears after a few hours, it was probably fake.
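If you prefer your checklists executable, the same triage can be sketched as a crude red-flag counter over post metadata. To be clear: the field names below are invented for illustration (no platform exposes exactly this data), the thresholds are arbitrary, and no score of any kind replaces actually waiting for verification:

```python
def red_flag_score(post):
    """Count checklist red flags in a post-metadata dict.
    Field names and thresholds are hypothetical, for illustration only."""
    checks = [
        (post["account_age_days"] < 7,        "brand-new account"),
        (post["follower_count"] < 100,        "almost no followers"),
        (post["video_seconds"] < 10,          "suspiciously short clip"),
        (post["has_ai_watermark"],            "visible AI watermark"),
        (not post["covered_by_news_outlets"], "no independent confirmation"),
    ]
    flags = [label for hit, label in checks if hit]
    return len(flags), flags

# A post matching the profile described in this article
suspect = {
    "account_age_days": 3,
    "follower_count": 31,
    "video_seconds": 8,
    "has_ai_watermark": True,
    "covered_by_news_outlets": False,
}

score, reasons = red_flag_score(suspect)
print(f"{score}/5 red flags: {', '.join(reasons)}")
```

The point of the exercise is less the code than the mindset: treat every dramatic war clip as guilty until confirmed, and count the reasons to doubt before you count the reasons to share.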

The Bottom Line: War Has Never Been More Entertaining (Or Less Reliable)

So here we are in March 2026, watching a real war unfold alongside a fake war generated by people who’ve figured out that tragedy pays. The same technology that can create stunning works of art can also create stunning works of propaganda. The same platforms that connect us to the world also connect us to people who want to exploit our emotions for profit.

Mahsa Alimardani, a researcher specializing in Iran at the Oxford Internet Institute, summed up the damage: “Fake videos like these have a detrimental impact on people’s trust in the verified information they see online and make it much harder to document real evidence”.

When everything could be fake, nothing feels real. And when nothing feels real, the actual victims of actual wars become just another piece of content in an endless scroll.

But hey, at least the guy in Pakistan with 31 phones is making a living. Small victories.

How to spot AI-generated content:

  • Unnatural shadows or lighting
  • Physical inconsistencies (extra fingers, strange textures)
  • Very short video durations
  • Accounts with no history suddenly breaking “major news”
  • Overly dramatic scenes that look too perfect
  • Watermarks from AI generators (sometimes visible)
  • Multiple cars or objects in exactly the same positions across different “time periods”

As Musk himself predicted back in October, “Most of what people consume in five or six years – maybe sooner than that – will be just AI-generated content”. He just didn’t mention that half of it would be about wars that aren’t actually happening, created by people who aren’t actually there, viewed by people who can’t tell the difference anymore.

Welcome to the future. It’s weird, it’s profitable, and it’s coming to a screen near you. Probably generated by AI.

THIS ARTICLE IS WRITTEN BY HUMAN INTEL – JOHN DENVER
