Can Elon Musk’s Chatbot Take on Russian Propaganda on X?


Recently, Ukrainian users of the social media platform X (formerly Twitter) noticed that the chatbot Grok had begun to boldly “crush” Russian propaganda. Not only that – it was openly expressing pro-Ukrainian views, at times using rather blunt language when referring to Russians. This behaviour earned Grok enthusiastic praise from the Ukrainian segment of X, while triggering outrage among Russians and pro-Kremlin influencers, who quickly labeled the chatbot “Russophobic” or even “Banderite.”

The Centre for Strategic Communication and Information Security explains why this happened and what outcomes to expect.

AI vs. Propaganda

It all began when Grok, developed by Elon Musk’s company xAI, received a set of updates over the weekend. Among other changes, the updates allowed the chatbot to make politically incorrect but fact-based statements and to treat media sources as inherently biased. In itself, nothing about this seemed particularly groundbreaking. xAI had designed Grok as a “truthful alternative” to other AI models, which Musk claims are “politically biased.” Just this past June, Musk promised to make changes to Grok following a heated public debate after a high-profile shooting in Minnesota. The attack killed Democratic state representative Melissa Hortman and her husband and wounded state Senator John Hoffman and his wife. As the investigation unfolded, online users argued over the suspect’s motives: some saw him as an unhinged Trump supporter, others as a left-wing extremist. Grok refused to support the latter theory, prompting Musk to vow that his bot would no longer “echo mainstream media narratives.”

It appears that changes were indeed implemented—and soon after, Grok launched what looked like a “crusade” against Russian propaganda. Ukrainian users and much of the Western audience responded with excitement. Some of Grok’s debates with pro-Russian accounts were striking: citing hard evidence, the chatbot dismantled propagandist arguments and voiced clear support for Ukraine.

This sparked a wave of optimism. Chatbots have become increasingly popular – not just as novelties for tech enthusiasts but as everyday information assistants for millions around the world. If they can effectively debunk disinformation and propaganda, Russia’s intelligence services could soon face serious challenges. Despite massive financial investment, the Kremlin’s machine of lies may simply be unable to cope with new technological barriers. What is the point of sprawling bot farms if every lie posted on X can be debunked within seconds? And since Grok is already integrated into X, integrating other chatbots into other major platforms is only a matter of time.

Still, AI is no magic wand – and this applies to fighting propaganda too.

(Not So) Obvious Risks

First, it’s important to remember that Grok’s recent “pro-Ukrainian turn” is just one facet of its behaviour. For instance, Rolling Stone published a piece highlighting the chatbot’s antisemitic remarks. In one discussion, Grok went so far as to say that “Adolf Hitler would’ve handled anti-white hatred best.” The post was deleted, and the chatbot later denied having made such statements.

Grok has also made a number of anti-Israel comments, calling Israel “a clingy ex who still complains about the Holocaust.” It questioned the widely accepted number of Jewish victims of the Holocaust and promoted the conspiracy theory of “white genocide” in South Africa. This is far from a complete list of Grok’s transgressions, but it clearly shows that it’s too early to see the bot as a reliable tool in the fight against propaganda.

Moreover, we must not forget that Grok’s behaviour and even its existence are part of Elon Musk’s broader battle against what he calls “mainstream propaganda.” What Musk himself defines as such – and what motivates him – is a separate discussion. What matters now is that Grok remains an instrument for advancing the goals of its creator, who can influence how it operates for any number of reasons.

This is hardly unique. Russian LLMs like YandexGPT and GigaChat demonstrate extreme levels of censorship, outright refusing to engage with users on certain topics. Soon after the launch of China’s chatbot DeepSeek, its use was restricted in multiple countries – both due to data security concerns and due to propagandist or misleading responses on topics sensitive to Beijing.

How long Grok’s “Ukrainian turn” will last is unclear. On July 9, a post appeared on the chatbot’s account announcing that the team was aware of Grok’s recent posts and was working to remove the “inappropriate” ones.

Some of that phraseology may well disappear from the bot’s vocabulary – the kind that in some cases genuinely crossed generally accepted boundaries. Yet we cannot entirely rule out that “inappropriate posts” also refers to the bot’s bold attacks on the Rashists. The final answer will become clear soon enough.

Either way, the most effective way to defeat the Kremlin’s lie machine is to deprive Moscow of the resources to wage war – including its information war. And that will require entirely different tools.
