Russia’s ‘Pravda’ Disinformation Network is Poisoning Western AI Models

A well-funded, Moscow-based propaganda machine has infiltrated leading artificial intelligence models, flooding Western AI systems with Russian disinformation, a NewsGuard audit has confirmed. The network—dubbed "Pravda," a nod to the Soviet-era newspaper—has been systematically injecting AI chatbots with false narratives by gaming search engines and web crawlers. The implications are severe: AI models are increasingly echoing Kremlin-backed falsehoods, compromising information integrity on an unprecedented scale.

The AI Disinformation Battlefield

NewsGuard’s audit of 10 leading generative AI tools—including OpenAI’s ChatGPT-4o, Google’s Gemini, and Microsoft’s Copilot—found that the models repeated Pravda’s false narratives 33 percent of the time. This marks a disturbing shift in how disinformation is disseminated: rather than targeting human audiences directly, Moscow’s information warfare machine is poisoning the very data streams that AI models rely upon to generate responses.


John Mark Dougan, an American fugitive turned Kremlin propagandist, laid out this strategy bluntly at a conference in Moscow earlier this year: “By pushing these Russian narratives from the Russian perspective, we can actually change worldwide AI.” Dougan’s statement underscores a key objective of the Pravda network: weaponizing AI-generated content to reshape global narratives in Russia’s favor.


How Pravda Corrupts AI Models

Unlike traditional disinformation campaigns that aim to persuade human readers, the Pravda network operates as a laundering operation for Kremlin propaganda. It syndicates misleading content across 150 seemingly independent websites, each optimized for AI and search engine algorithms. This includes fabricated claims about Ukrainian President Volodymyr Zelensky misappropriating military aid and false reports of U.S. bioweapons labs in Ukraine.


NewsGuard’s findings align with those of the American Sunlight Project (ASP), a U.S. nonprofit that has termed this strategy “LLM grooming.” According to ASP, the more frequently a false narrative appears in search results and indexed content, the greater the likelihood that large language models (LLMs) will absorb and regurgitate it.


“The long-term risks—political, social, and technological—associated with potential LLM grooming within this network are high,” the ASP report warned. “The larger a set of pro-Russia narratives is, the more likely it is to be integrated into an LLM.”
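To make the grooming mechanism concrete, the sketch below shows one defensive counterpart: collapsing near-duplicate articles before they enter a training corpus, so that 150 syndicated copies of one story stop looking like 150 independent confirmations. This is an illustration only; the shingles, jaccard, and dedupe functions and the 0.8 similarity threshold are hypothetical choices, not any lab's actual pipeline.

```python
# Minimal sketch: collapsing syndicated near-duplicates before corpus ingestion.
# "LLM grooming" exploits repetition, so removing duplicate copies of a story
# removes the frequency signal the technique depends on.

def shingles(text: str, k: int = 5) -> set[str]:
    """Break text into overlapping k-word shingles for fuzzy comparison."""
    words = text.lower().split()
    return {" ".join(words[i:i + k]) for i in range(max(1, len(words) - k + 1))}

def jaccard(a: set[str], b: set[str]) -> float:
    """Jaccard similarity between two shingle sets (0.0 to 1.0)."""
    if not a or not b:
        return 0.0
    return len(a & b) / len(a | b)

def dedupe(articles: list[str], threshold: float = 0.8) -> list[str]:
    """Keep only the first copy of each near-identical article."""
    kept: list[tuple[str, set[str]]] = []
    for text in articles:
        sh = shingles(text)
        # Keep this article only if it is not too similar to anything kept so far.
        if all(jaccard(sh, seen) < threshold for _, seen in kept):
            kept.append((text, sh))
    return [text for text, _ in kept]
```

At crawl scale, real deduplication relies on probabilistic structures such as MinHash with locality-sensitive hashing rather than this pairwise comparison; the point here is only that collapsing mirrored copies into one strips out the repetition that grooming depends on.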


Russian Hackers Are Poisoning AI Training Data

Beyond merely manipulating AI-generated text, there is a more sinister threat at play: the possibility of deliberate data poisoning.


“While the spread of misinformation is alarming, there are also significant business implications,” said Ante Gojsalic, CTO and Co-founder of AI security firm SplxAI. “AI models are trained on publicly available data, and when Russian hackers published web pages portraying Russia more favorably than Ukraine, these models subsequently propagated misleading information. A more serious concern is that those web pages may contain poisoned chunks of data. If this data is integrated into a company’s knowledge base or used as training material for their AI models, it can be used to steal confidential information from U.S. enterprises.”


This raises a troubling possibility: AI models trained on Pravda-influenced data may not only misinform users but could also become vectors for cyber-espionage.
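One common mitigation on the enterprise side is to gate what enters a retrieval-augmented knowledge base by provenance. The sketch below is a minimal illustration, assuming each candidate document carries a source_url field; the TRUSTED_DOMAINS allowlist and the ingest helper are placeholders, not any vendor's actual API.

```python
# Minimal sketch: provenance gate for a corporate knowledge base.
# Documents from domains outside a vetted allowlist are quarantined
# for review instead of being embedded and indexed.

from urllib.parse import urlparse

# Placeholder allowlist; a real deployment would maintain a vetted source list.
TRUSTED_DOMAINS = {"reuters.com", "apnews.com"}

def is_trusted(url: str) -> bool:
    """True if the URL's host is a trusted domain or a subdomain of one."""
    host = urlparse(url).hostname or ""
    return any(host == d or host.endswith("." + d) for d in TRUSTED_DOMAINS)

def ingest(documents: list[dict]) -> tuple[list[dict], list[dict]]:
    """Split candidate documents into (accepted, quarantined) by source domain."""
    accepted, quarantined = [], []
    for doc in documents:
        (accepted if is_trusted(doc["source_url"]) else quarantined).append(doc)
    return accepted, quarantined
```

An allowlist is blunt, but for a corporate knowledge base the cost of excluding an unknown site is usually far lower than the cost of ingesting a poisoned one.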


The Scale of Pravda’s Influence

Since its emergence in 2022, the Pravda network has expanded rapidly. It now targets audiences in 49 countries, publishing content in multiple languages to increase its credibility and reach. According to Viginum, a French government agency tracking foreign disinformation, Pravda churned out over 3.6 million articles in 2024 alone.


Notably, these websites receive little organic traffic, averaging fewer than 1,000 monthly unique visitors per domain, according to SimilarWeb. However, their true power lies in their ability to game AI training pipelines and search engine algorithms. AI chatbots have already begun citing Pravda articles as legitimate sources, with NewsGuard finding 92 different disinformation-laden Pravda articles cited by major AI models.
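An audit of this kind can be approximated mechanically. The sketch below, which is illustrative rather than NewsGuard's actual method, scans a chatbot answer for cited URLs and flags any whose domain appears on a blocklist; the pravda-mirror.example entry is a deliberately fictitious placeholder.

```python
# Minimal sketch: screening chatbot answers for blocklisted citations.

import re
from urllib.parse import urlparse

# Placeholder blocklist; real entries would come from trackers of the network.
BLOCKLIST = {"pravda-mirror.example"}

URL_RE = re.compile(r"https?://[^\s<>()]+")

def flagged_citations(answer: str) -> list[str]:
    """Return cited URLs whose domain (or parent domain) is blocklisted."""
    flagged = []
    for url in URL_RE.findall(answer):
        host = (urlparse(url).hostname or "").removeprefix("www.")
        if any(host == d or host.endswith("." + d) for d in BLOCKLIST):
            flagged.append(url)
    return flagged
```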


What’s Next in the AI Information War?

Addressing this threat will require AI companies to refine their data sourcing methods and implement stronger countermeasures against manipulated content. Simply blocking Pravda websites is not enough: new domains can be registered almost instantly, and because Pravda produces no original content of its own, the same repurposed Russian state propaganda can simply be laundered through whatever fronts replace them.
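Because the underlying text is recycled, content fingerprinting offers one plausible alternative to domain blocking. The following sketch uses a simple 64-bit SimHash so that a known article can be recognized even after it resurfaces on a fresh domain; the functions and the 3-bit matching distance are illustrative assumptions, not a production system.

```python
# Minimal sketch: fingerprinting content instead of blocking domains.
# A SimHash fingerprint follows the text, not the URL, so re-hosting a
# known article on a new domain does not evade detection.

import hashlib

def simhash(text: str, bits: int = 64) -> int:
    """Compute a SimHash fingerprint from word-level features."""
    weights = [0] * bits
    for word in text.lower().split():
        # Stable 64-bit hash per word.
        h = int.from_bytes(hashlib.blake2b(word.encode(), digest_size=8).digest(), "big")
        for i in range(bits):
            weights[i] += 1 if (h >> i) & 1 else -1
    # Fingerprint bit i is set if the summed weight at position i is positive.
    return sum(1 << i for i in range(bits) if weights[i] > 0)

def hamming(a: int, b: int) -> int:
    """Number of differing bits between two fingerprints."""
    return bin(a ^ b).count("1")

def matches_known_propaganda(text: str, known_prints: list[int], max_dist: int = 3) -> bool:
    """Flag text whose fingerprint sits within max_dist bits of a known item."""
    fp = simhash(text)
    return any(hamming(fp, kp) <= max_dist for kp in known_prints)
```

The design choice matters: the fingerprint travels with the article rather than the address, so registering a new domain does not reset the defender's work.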


Meanwhile, Russia is doubling down on AI as a tool for global influence. At a November 2023 AI conference in Moscow, Russian President Vladimir Putin remarked, “Western search engines and generative models often work in a very selective, biased manner… We need to start training AI models without this bias. We need to train it from the Russian perspective.”

As the battle for AI-generated truth intensifies, it is becoming increasingly clear that the next frontier of information warfare will be fought not just in social media feeds, but in the data streams that feed AI itself.
