Dozens of AI-generated videos depicting fabricated Ukrainian soldiers surrendering or weeping have spread across TikTok, Instagram, Telegram, Facebook, and X in November, accumulating millions of views before platform interventions, AFP reports.
As Russia struggles to capture Pokrovsk on the battlefield, the site of one-third of all frontline clashes and half of Russia's glide bomb strikes, the Kremlin is opening a second front online. The fake videos aim to erode Ukrainian will to fight by depicting supposed soldiers as broken and desperate, according to propaganda specialists.
How AI deepfakes target Ukrainian morale
The fake videos show soldiers in Ukrainian uniforms crying, begging not to be sent to the front, or claiming to be “leaving Pokrovsk,” AFP reported. Visual inconsistencies typical of generative AI remain detectable: in one video, a supposed soldier walks normally despite wearing a leg cast, while a stretcher levitates and disembodied legs fade in and out of the background.
Some videos bear the logo of OpenAI’s Sora video creation tool. Others appropriate the likeness of real people, including exiled Russian YouTuber Alexei Gubanov.
“Obviously it’s not me,” Gubanov said in a YouTube response. “Unfortunately, a lot of people believe this… and that plays into the hands of Russian propaganda.”
Ian Garner, a specialist in Russian propaganda at the Pilecki Institute, told AFP the disinformation represents “an old tactic, but the technology is new.” The videos work by “chipping away at Ukrainian morale, saying: ‘Look, this is somebody just like you, it could be your brother, your father.'”
Coordinated spread across platforms and languages
AFP found the fake content circulating in Greek, Romanian, Bulgarian, Czech, Polish, and French across multiple platforms, including on a Russian weekly’s website and in a Serbian tabloid. TikTok told AFP it deleted accounts behind the videos—but not before one garnered over 300,000 “likes” and several million views.
The European Digital Media Observatory, an EU-funded fact-checking network, has published more than 2,000 articles related to Ukraine war disinformation since 2022, with AI becoming an increasingly prevalent topic.
Pablo Maristany de las Casas of the Institute for Strategic Dialogue said the videos fit a “broader narrative that we’ve seen since the beginning of the invasion” portraying Zelenskyy as forcibly sending civilians to fight.
Tech companies struggle to keep pace
An Institute for Strategic Dialogue study published in October found that among chatbots tested, nearly one-fifth of responses cited Russian state-attributed sources—indicating Kremlin narratives have penetrated AI systems themselves.
OpenAI told AFP it had conducted an investigation but did not elaborate. Maristany de las Casas warned that “the scale and impact of information warfare outpace the companies’ responses.”
Carole Grimaud, a researcher at Aix-Marseille University, said the videos “instrumentalise uncertainty to sow doubt in public opinion.” While individual impact is difficult to measure, “when it is repeated, it is possible that people’s perceptions change.”