Fake explosions, fake missiles, fake troops: AI videos and images of Iran war spread widely on social media

Since the onset of the Iran war, social media platforms have become a battleground for misinformation, with AI-generated content fueling confusion. Unlike the earlier era of digital forgeries, which relied on basic photo manipulation or mislabeled clips, today’s deceptions are crafted with advanced artificial intelligence tools. This shift has made the falsehoods more convincing, harder to detect, and increasingly pervasive.

The Evolution of War-Time Deception

As the conflict against Iran intensifies, the spread of synthetic media has accelerated. Hany Farid, a UC Berkeley professor specializing in digital forensics, notes that the landscape has transformed dramatically. “Ten years ago, there might have been just one or two fake images circulating,” he explained. “Now, hundreds appear daily, and they’re nearly indistinguishable from real footage.”

“It’s not just realistic; it’s landing — it’s landing hard. People believe it and they’re amplifying it.”

Shayan Sardarizadeh, a BBC Verify journalist and expert in debunking war-related disinformation, highlighted the accessibility of generative AI. “In the past year, AI tools have become widely available, enabling the creation of highly convincing videos and images of significant war events,” he said. “These can deceive even those who aren’t trained to spot digital anomalies.”

A Flood of AI-Generated Fictions

Experts report that AI-created videos and images have garnered millions of views within weeks, with new content emerging faster than fact-checkers can respond. Examples include a video showing a fictional missile barrage targeting Tel Aviv, Israel, and another portraying civilians fleeing an alleged attack at an airport. One clip purports to depict U.S. special forces captured by Iranian troops, while another suggests security footage of military facilities in Iran being destroyed. Of these four clips, three are AI-generated; the fourth is a real recording from last year.

Additionally, fake stills claim to capture dramatic scenes: a U.S. military base in Iraq ablaze, the Saudi Embassy burning, and Iranian Supreme Leader Ali Khamenei found dead under rubble. A government-linked publication even shared a fabricated satellite image of damage to a U.S. base in Bahrain. These examples illustrate the scale and variety of AI-driven disinformation in the current conflict.

Challenges to Truth in the Digital Age

Partisan divides and fragmented media ecosystems have created an environment where misinformation thrives. Social media algorithms prioritize engagement, often amplifying content from like-minded users, leaving the public susceptible to biased or false narratives. Meanwhile, platforms have scaled back their moderation efforts, allowing AI-generated content to spread unchecked.

“The content is more realistic, the volume is higher, the penetration is deeper — this is our new reality. And it’s really messy,” Farid remarked.

Despite ongoing efforts to counter these fakes, their proliferation continues at an alarming rate. The platform X recently introduced a policy of suspending content creators who fail to disclose AI-generated war footage, but Farid remains skeptical. He pointed out that most users aren’t part of the payment program the policy covers, leaving crowdsourced fact-checking as the primary defense — a system with inconsistent reliability.