WHO ARE WE?
👋 We are Oplot
We are an independent team of volunteers leveraging machine learning and natural language processing to analyze and combat Russian government propaganda.
“But if thought corrupts language, language can also corrupt thought.”
George Orwell, "Politics and the English Language"
Our team brings together technical, humanities, and humanitarian experts. We started working on our projects at the Projector 2023 hackathon, went on to win the Demhack 7 hackathon and the Reforum accelerator, and a team we mentored won Projector 2024.
Background
After Russia invaded Ukraine, polls indicated that a majority of Russians supported the invasion. Why? Many believed that Russia was targeting only military sites, that most Ukrainians welcomed the Russian forces, and that Russia had been provoked and left with no other option.
Context
Incomplete and misleading information fostered a sense of complacency, allowing Putin’s regime to justify the invasion. Once again, we witnessed the powerful grip of propaganda on public opinion and the alarming speed and reach of misinformation.
Challenge
To counter propaganda, one must first understand its agenda. The information landscape is vast and complex, spanning television, Telegram, YouTube, war correspondents, and independent media. Yet, the community lacks the tools for structured, automated analysis. Traditional media analysis can take months of painstaking effort—time that skilled experts could better spend on creative and strategic work rather than wading through fake and toxic content.
Goal
We are developing a suite of tools to help researchers analyze state-sponsored propaganda and break through information bubbles. Using natural language processing, we process vast amounts of text in hours—detecting troll-like behavior, uncovering emotional manipulation, and assessing political bias.
🧌 Internet trolls block civil debate, incite conflict, and spread misinformation and hatred.
💡 To counter this invasion, we built an anti-troll bot.
helping chat moderators
Troll Detection
❗ Our approach can detect provocative and aggressive comments in Telegram chats, flagging them for moderators and, if necessary, automatically removing them to maintain a healthy discussion environment.
🔧 The bot is built on a troll-recognition algorithm trained on 10,000+ comments from anti-war Telegram channels and the Botnadzor project. We labeled the comments that moderators had deleted, evaluated a dozen text-processing models on them, selected the best one, and added a neural network on top. The bot achieves 74% accuracy on a held-out test set.
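For illustration, here is a minimal sketch of how such a moderation bot can be wired together: a fine-tuned classifier scores each incoming message, and high-confidence troll comments are removed. The checkpoint name `oplot/troll-bert`, the `TROLL` label, and the threshold are hypothetical placeholders, not our production configuration.

```python
# Minimal moderation-bot sketch (python-telegram-bot v20+ and transformers).
from telegram import Update
from telegram.ext import Application, ContextTypes, MessageHandler, filters
from transformers import pipeline

# Hypothetical checkpoint; the real model is fine-tuned on 10,000+
# labeled comments from anti-war Telegram chats and Botnadzor data.
clf = pipeline("text-classification", model="oplot/troll-bert")

TROLL_THRESHOLD = 0.9  # act only on high-confidence predictions

async def moderate(update: Update, context: ContextTypes.DEFAULT_TYPE) -> None:
    msg = update.message
    if msg is None or msg.text is None:
        return
    pred = clf(msg.text[:512])[0]        # truncate to the model's input limit
    if pred["label"] == "TROLL" and pred["score"] >= TROLL_THRESHOLD:
        await msg.delete()               # or flag for moderators instead

app = Application.builder().token("YOUR_BOT_TOKEN").build()
app.add_handler(MessageHandler(filters.TEXT & ~filters.COMMAND, moderate))
app.run_polling()
```

In practice you would likely forward borderline messages to moderators rather than delete them outright; the threshold controls that trade-off.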
🤖 To install the bot, email our team at [email protected]. We'll provide detailed setup instructions and walk you through step by step.
Our goal is to provide tools to detect emotional manipulation and other propaganda techniques in news and social media texts.
highlighting manipulation in news
Manipulation Detection
The Naskvoz ("See Straight Through") project leverages artificial intelligence to detect emotional manipulation in news texts. Our system can analyze thousands of articles in minutes, identifying the degree of manipulation with speed and precision.
Example:
| Text | System assessment |
|---|---|
| "Many military and political experts assess the current situation in the world as extremely dangerous and acute, primarily because of the threat of the use of nuclear weapons. In recent decades, there has never been such a high risk of a war with the use of such weapons as there is now. All because of the aggressive actions of the US." Source: RIA (translated) | Emotional coloring: the use of strong emotional words and expressions, such as "extremely dangerous and acute" and "high risk of unleashing war", is aimed at evoking fear and anxiety in the reader. Generalizations and exaggerations: phrases like "all because of the aggressive actions of the US" can lead to a simplistic perception of the situation, ignoring the complexity of international relations. |
You can use the system as a standalone solution or integrate it into your product or workflow (we will assist you with that).
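As a sketch of what an integration might look like, the snippet below scores a text with a fine-tuned classifier and returns per-category probabilities. The checkpoint name `oplot/manipulation-bert` is a hypothetical placeholder; actual model access is provided on request (see below).

```python
from transformers import pipeline

# Hypothetical checkpoint name; real access is arranged via our API.
scorer = pipeline("text-classification",
                  model="oplot/manipulation-bert",
                  top_k=None)            # return a score for every category

def assess(text: str) -> dict:
    """Map each manipulation category to its predicted probability."""
    scores = scorer([text[:512]])[0]     # truncate to the model's input limit
    return {s["label"]: round(s["score"], 3) for s in scores}

example = ("In recent decades, there has never been such a high risk "
           "of a war with the use of such weapons as there is now.")
print(assess(example))
```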
Where can you use our analytics?
- Automated Media Bias Rating – scale bias assessment across a vast number of texts efficiently.
- News Aggregator Integration – act as a safety net for readers, providing insights into information quality even in passive consumption. The system can alert users before they share manipulative content with friends or family.
- Research Tool for Journalists, Academics, and Activists – track propaganda levels and manipulation tactics in real-time or retrospectively. Analyze how different sources influence their audiences and compare their strategies.
- Educational Resource – support media literacy and critical thinking courses, making them more interactive, large-scale and up-to-date.
- Content Quality Assurance – self-check your own publications for manipulative elements, much like Grammarly or Glavred do for grammar.
- Personal Awareness Tool – easily demonstrate manipulation tactics to relatives and friends, helping them recognize and navigate biased information environments.
How can you access the service?
A fact-checking Telegram bot: Докопался! (roughly, "Got to the Bottom of It"); its code is available on GitHub.
A Telegram bot that detects emotional manipulation: Насквозь (the Naskvoz project described above).
For security reasons, API access to our predictive models is available on request.
A continuously updated and automatically analysed news feed from Telegram channels is available here.
"Propaganda Thermometer", a live demo overview of known propagandistic Telegram channels, is available here.
Stay tuned for more!
How does it work under the hood?
Our approach is built on natural language processing (NLP), leveraging large language models (LLMs) for in-depth analysis and detailed explanations. In addition to LLMs, our toolkit includes highly efficient, cost-effective models that we have trained by fine-tuning the BERT (Transformer) neural network. These models can accurately detect manipulative language in Russian with approximately 85% precision, enabling rapid and reliable analysis at scale.
To train the models, we used open data from scientific conferences (SemEval 2023), as well as a dataset that our team collected and annotated ourselves. Our corpus includes over 3,000 sentences, each labeled with one of six manipulation categories: logical fallacies and lies, enemy demonization, emotionally charged language, appeals to tradition and history, justification of violence, and black-and-white thinking. The agreement among our annotators meets the standards of peer-reviewed research in this field, ensuring high reliability and consistency in classification.
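As an illustration of this setup, the sketch below fine-tunes a Russian BERT checkpoint on sentences labeled with the six categories. The checkpoint, the English label names, the hyperparameters, and the placeholder data are assumptions made for the example, not our exact training configuration.

```python
from datasets import Dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

# The six manipulation categories from our annotation scheme
# (names here are illustrative English shorthands).
LABELS = ["fallacy_or_lie", "enemy_demonization", "loaded_language",
          "appeal_to_tradition", "violence_justification", "black_and_white"]

# Placeholder data; the real corpus has 3,000+ annotated sentences.
sentences = ["A sample sentence.", "Another sample sentence."]
labels = [2, 4]

tok = AutoTokenizer.from_pretrained("DeepPavlov/rubert-base-cased")
model = AutoModelForSequenceClassification.from_pretrained(
    "DeepPavlov/rubert-base-cased", num_labels=len(LABELS))

ds = Dataset.from_dict({"text": sentences, "label": labels})
ds = ds.map(lambda x: tok(x["text"], truncation=True,
                          padding="max_length", max_length=128), batched=True)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="manip-bert",
                           num_train_epochs=3,
                           per_device_train_batch_size=16),
    train_dataset=ds,
)
trainer.train()
```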
To obtain the data, please fill in this Google form.
If you'd like to test our model on your own text collection or explore our dataset, feel free to send us an email. We welcome collaborations and are excited to expand our network. Our current partners include NeNorma, TrueStory (formerly Yandex), and AskRobot. We look forward to working with more projects committed to media transparency and fact-based analysis!
Our next step is to integrate AI into media literacy projects for school and university students, using a gamified and interactive approach. We are always looking for NLP data scientists to enhance our model—such as expanding it to multi-class classification—as well as game developers, backend, frontend, and full-stack engineers to help bring our vision to life. If you're interested in contributing, we'd love to hear from you!
Our goal is to provide tools to extract topics, tags, and narratives from news and social media texts.
analysing media landscape
Media Narratives
Our models reduce media-analysis time from several months of dedicated work by three or more researchers to a few hours of work by a single researcher.
We have launched:
- an app that analyses news from different angles;
- a news digest;
- an interactive analytical dashboard for narrative detection;
- a demo dashboard of what information we can extract for your project: topics (interpreted with LLMs), named entities, narratives.
Analysis
- Themes – recurring and trending topics and ideas.
- Clusters – groups of similar articles.
- Narrative identification – matching articles to predefined propaganda frames.
- Named entities – people, organizations, and places mentioned.
- Sentiment associated with particular people and organizations (coming soon).
All of these can be compared across different media outlets; a simplified sketch of the pipeline follows.
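The sketch below shows one plausible way to implement the theme clustering and narrative matching steps: embed texts with a multilingual sentence encoder, cluster them, and match each article to the closest predefined frame. The library choices, frame texts, and sample posts are assumptions, not necessarily what runs in our production pipeline.

```python
import numpy as np
from sentence_transformers import SentenceTransformer
from sklearn.cluster import KMeans

# Placeholder posts; in practice these come from monitored Telegram channels.
articles = ["Sample post about sanctions.", "Sample post about negotiations."]

# 1. Embed articles and cluster similar coverage together.
embedder = SentenceTransformer("paraphrase-multilingual-MiniLM-L12-v2")
emb = embedder.encode(articles)
clusters = KMeans(n_clusters=2).fit_predict(emb)

# 2. Match each article to the closest predefined narrative frame
#    by cosine similarity to a short frame description.
frames = ["The West provoked the conflict.", "Russia protects civilians."]
frame_emb = embedder.encode(frames)
sims = (emb @ frame_emb.T) / (
    np.linalg.norm(emb, axis=1, keepdims=True) * np.linalg.norm(frame_emb, axis=1))
best_frame = sims.argmax(axis=1)   # index of the closest frame per article
```

Named entities and sentiment can be layered on top of the same clusters with standard NER and sentiment models.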
Target audience
- Researchers and journalists writing about Russian society and advocacy
- Teachers and students of media literacy and journalism courses
- Activists working in the field of counter-propaganda
- News readers interested in how different events are covered in the Russian media