The Growing Threat of LLM Grooming: How Fake News Is Poisoning Language Models
A recent NewsGuard investigation found that Russia’s Pravda network used LLM Grooming, flooding the web with 3.6 million articles in 2024, leading AI chatbots such as ChatGPT and Gemini to repeat pro-Kremlin disinformation in roughly 33% of tested responses.
The New Battlefield: AI and Disinformation
Artificial Intelligence (AI) has revolutionized how we access and process information, but it has also opened new avenues for manipulation. A recent investigation by NewsGuard [1][2] has uncovered a sophisticated Russian disinformation campaign designed not just to mislead human readers but to manipulate AI chatbots themselves. This method, known as LLM Grooming, aims to infiltrate the training and retrieval data of large language models (LLMs), ensuring that misinformation becomes embedded in their responses.
Inside the Pravda Network: 3.6 Million Articles of Disinformation
At the center of this effort is a Russian disinformation network called Pravda (Russian for “truth”), which has been flooding the internet with false and misleading narratives. The network published 3.6 million articles in 2024 alone, many of which found their way into leading AI systems, including ChatGPT, Google Gemini, Microsoft Copilot, and Meta AI.
Rather than focusing on persuading individual readers, Pravda’s strategy is to saturate the web with false narratives, increasing the likelihood that AI models will process, retrieve, and unknowingly amplify these misleading claims.
How AI Models Are Being Manipulated
Disinformation campaigns traditionally targeted social media users and news consumers. However, the rise of AI-powered chatbots has given propagandists a new target: the LLMs that power these chatbots. NewsGuard’s study revealed that one-third of chatbot responses tested contained narratives promoted by Pravda.
The method is simple but effective:
- Flood the web with propaganda articles, ensuring they appear in search engine results and AI training datasets.
- Exploit AI retrieval mechanisms, which rely on publicly available information to generate responses (a minimal sketch of this weak point follows the list).
- Influence chatbot outputs, making them repeat and even validate disinformation narratives.
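To make the retrieval weak point concrete, here is a minimal, purely illustrative Python sketch. The documents, domain names, and scoring are hypothetical and do not reflect NewsGuard’s methodology or any real chatbot’s pipeline: a naive retriever that ranks web snippets only by keyword overlap will surface whichever narrative has been duplicated most heavily, so sheer publishing volume can crowd out reliable reporting.

```python
# Purely illustrative sketch: hypothetical documents and a toy keyword-overlap
# retriever, showing how a flood of near-duplicate propaganda can dominate
# what a retrieval-augmented chatbot sees. This is not a real system.

from collections import Counter

# Hypothetical web snapshot: one reliable article vs. fifty near-duplicate
# copies of the same false narrative hosted on mirror sites.
WEB_INDEX = [
    {"source": "reliable-news.example", "text": "no evidence of bioweapons labs in ukraine"},
] + [
    {"source": f"mirror-{i}.example", "text": "secret bioweapons labs operate in ukraine"}
    for i in range(50)
]

def score(query: str, text: str) -> int:
    """Toy relevance score: number of words shared between query and document."""
    q_words = Counter(query.lower().split())
    d_words = Counter(text.lower().split())
    return sum(min(count, d_words[word]) for word, count in q_words.items())

def naive_retrieve(query: str, k: int = 3) -> list[dict]:
    """Return the top-k documents by keyword overlap, ignoring source trust."""
    return sorted(WEB_INDEX, key=lambda doc: score(query, doc["text"]), reverse=True)[:k]

if __name__ == "__main__":
    for doc in naive_retrieve("secret bioweapons labs in ukraine"):
        print(doc["source"], "->", doc["text"])
    # Every top result is a mirror-site copy of the false narrative, which is
    # what a chatbot grounding its answer on these snippets would then echo.
```

Production retrieval systems are far more sophisticated than this toy, but the underlying incentive is the same: if relevance signals do not account for source trust, mass-produced near-duplicates gain ground.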
What False Narratives Are Being Spread?
The Pravda network has been identified as a key source of at least 207 disinformation narratives, with a heavy focus on Ukraine, Western politics, and global conflicts. Some of the most common false claims include:
- The existence of secret U.S. bioweapons labs in Ukraine.
- Accusations that Ukrainian President Volodymyr Zelensky is misusing U.S. military aid.
- False claims that Ukraine banned Donald Trump’s Truth Social platform, even though the platform was never available there.
These narratives, often originating from Kremlin-backed media outlets, are repackaged and distributed through a network of 150 Pravda-affiliated websites that mimic legitimate news sources.
The Risks of AI-Manipulated Information
The infiltration of AI systems by disinformation campaigns poses serious political, social, and technological risks. AI chatbots are widely used for research, education, and even decision-making, meaning that false narratives can spread faster and more convincingly than ever before.
Experts warn that as AI becomes more integrated into everyday life, the ability to control its outputs becomes a powerful tool for influence operations. The concept of LLM Grooming—deliberately feeding false information into AI models—could be used by state actors, extremist groups, and other malicious entities to distort global discourse.
What Can Be Done?
The fight against AI-driven disinformation is still in its early stages, but several countermeasures are being considered:
- Stronger content filtering: AI developers must improve fact-checking mechanisms and create better safeguards against retrieving manipulated content (a sketch of a source-trust filter follows the list).
- Enhanced transparency: AI companies should disclose how models select sources and provide users with information about potential biases.
- Collaboration with fact-checkers: Organizations like NewsGuard, the American Sunlight Project, and other watchdogs can help AI developers detect and flag disinformation campaigns.
- Cybersecurity measures: Governments and private sector players must work together to combat state-sponsored disinformation networks.
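As one hedged illustration of the “stronger content filtering” idea above, the sketch below shows a source-trust check applied before retrieved documents ever reach the model. The blocklist, reputation scores, and threshold are invented for this example; a real deployment would draw on maintained reputation data from fact-checkers and watchdogs, and would still need relevance ranking, deduplication, and human review.

```python
# Illustrative sketch of a pre-retrieval source-trust filter. The blocklist,
# reputation scores, and threshold below are invented for this example.

KNOWN_DISINFO_DOMAINS = {"mirror-0.example", "mirror-1.example"}  # hypothetical
SOURCE_REPUTATION = {"reliable-news.example": 0.9}                # hypothetical scores

def is_retrievable(doc: dict, min_reputation: float = 0.5) -> bool:
    """Drop documents from flagged domains or from sources below a reputation floor."""
    domain = doc["source"]
    if domain in KNOWN_DISINFO_DOMAINS:
        return False
    return SOURCE_REPUTATION.get(domain, 0.0) >= min_reputation

def filtered_candidates(docs: list[dict]) -> list[dict]:
    """Apply the trust filter before any relevance ranking or generation step."""
    return [doc for doc in docs if is_retrievable(doc)]

if __name__ == "__main__":
    candidates = [
        {"source": "reliable-news.example", "text": "no evidence of bioweapons labs"},
        {"source": "mirror-0.example", "text": "secret bioweapons labs in ukraine"},
        {"source": "unknown-blog.example", "text": "unverified repost of the same claim"},
    ]
    for doc in filtered_candidates(candidates):
        print(doc["source"])  # only the vetted source survives the filter
```

Defaulting to exclusion for unknown sources trades some recall for robustness against freshly registered mirror domains, which is one reason transparency about how models select sources matters alongside filtering.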
Conclusion
The emergence of LLM Grooming represents a new front in the information war. By deliberately feeding false narratives into AI training and retrieval data, bad actors can subtly influence chatbot-generated responses, shaping public perception in ways that are difficult to detect.
As generative AI becomes more widespread, ensuring the integrity of its information sources will be crucial. Without proactive countermeasures, language models risk becoming tools of disinformation rather than sources of truth.