Disinformation Campaign Intensifies Amid Critical Battle for Pokrovsk
While Ukrainian forces battle to maintain control of the strategic eastern city of Pokrovsk, a parallel Russian offensive is unfolding across social media platforms. Dozens of AI-generated videos depicting Ukrainian soldiers surrendering weapons or weeping en route to the frontlines have gone viral in November, accumulating millions of views and providing manufactured “evidence” for pro-Russian narratives of a collapsing Ukrainian army.
Manufactured Reality Meets Actual Battlefield Pressures
Ukrainian President Volodymyr Zelensky has acknowledged the “complicated” situation in Pokrovsk, where outnumbered and outgunned Ukrainian troops struggle to prevent Russian forces from capturing the logistical hub that Moscow has targeted for over a year. The AI videos appear designed to exploit this genuine military pressure, presenting a distorted version of events to domestic and international audiences.
“They fit into the broader narrative we’ve seen since the invasion began, that President Zelensky is forcing young and old men to the front because the army can’t cope,” explained Pablo Maristany de las Casas, an analyst of pro-Kremlin propaganda at the Institute for Strategic Dialogue.
Technical Flaws Reveal AI Origins
The fabricated content carries telltale signs of AI generation that researchers say remain common, even as they grow harder to spot with the naked eye. One video shows a man who claims to be “leaving Pokrovsk” walking effortlessly despite a leg cast, while a stretcher appears to levitate and disembodied legs materialize among soldiers marked by crudely rendered Ukrainian flags.
Another series of fake videos, some bearing the logo of OpenAI’s Sora video-generation tool, features soldiers in Ukrainian uniforms crying and begging not to be sent to the front. Investigations revealed that the faces actually belong to Russian streamers; exiled content creator Alexei Goubanov confirmed his likeness was used without consent, saying the fakes “serve Russian propaganda.”
Platform Response and Continuing Spread
TikTok told AFP it removed accounts responsible for publishing the videos, though not before one accumulated over 300,000 likes and several million views. OpenAI confirmed investigating the matter but provided no details. Despite these actions, the videos continue circulating on Instagram, Telegram, Facebook, X, and appear in Greek, Romanian, Bulgarian, Czech, Polish, and French publications, as well as on a Russian weekly’s website and in a Serbian tabloid.
Strategic Goals of Digital Deception
According to Ian Garner, a specialist in Russian propaganda at Warsaw’s Pilecki Institute, disinformation production represents “an old technique, but the technology is new,” with AI making propaganda more effective by allowing it to be “humanized.”
These videos “erode morale in Ukraine by saying ‘look, this man could be your brother,’” while also influencing public opinion in allied countries by hammering home the idea that “Russia’s victory is inevitable.” Within Russia itself, they serve to reassure the population.
Broader Information Warfare Landscape
The European Digital Media Observatory has documented over 2,000 fact-checking articles published in the EU since Russia’s invasion began, with AI playing an increasingly prominent role in manipulative content. The technology’s reach extends beyond video—an October ISD study found that nearly one-fifth of responses from popular AI chatbots cited Russian state-linked sources, many of which face EU sanctions.
While some companies have shown willingness to combat misuse of their tools, Maristany de las Casas notes that “the scale and impact of information warfare are moving much faster than their response.” As the battle for Pokrovsk continues both on the ground and online, the digital front represents an increasingly sophisticated dimension of modern conflict.