08/02/2026
We’ve started 2026 at full steam: PROMPT analysts have published several blog posts for the European Journalism Observatory; we are finalising our beta-testing campaign with the EDMO network to fine-tune our disinformation dissection tools; and we are delving into the information manipulation attempts that may target Wikipedia in the context of the upcoming French local elections… And we are opening seats for PROMPT’s closing event!

Registration for the PROMPT closing event, hosted by the Infox sur Seine Days 2026 in Paris on 26 February, is now open. Registration is free but mandatory, and seats are (very) limited!
With 18 months of research under our belt, we’ll bring evidence to several “tough conversations” on the future of the fight against disinformation and the role of AI in it. Key moments to look forward to:
- Insights from inside the Russian propaganda machine in Africa, a Forbidden Stories investigation;
- What disinformation costs — and what fighting it costs;
- Does a whole-of-society approach to disinformation work, and what can we do to improve it?;
- “Counterpropaganda” in democratic contexts: how to act in an asymmetrical fight against the “disinformation industry”?
We’ll also be showcasing the new Wikipedia Sensitivity Barometer and the results of our monitoring of information manipulation attempts in the context of the French local elections. More info very soon!
→ To grab one of the last seats, please register here
LGBTQ+ rights as a culture-war battleground: a PROMPT-based analysis (EJO) traces how recurring anti-LGBTQ+ narratives — “protecting children,” “gender ideology,” institutional capture — get repackaged and amplified across social media ecosystems, and why these loops may cause real-world harm.
Best Practice Guidelines: from principles to newsroom reality: an EJO piece on PROMPT’s Best Practice Guidelines for the use of AI in journalism explores the gap between “responsible AI” talk and what editorial teams can actually do under time pressure — and how to translate guidelines into concrete routines.
New MOOC module (COPE): AI + disinformation + media literacy: a new EJO article presents PROMPT’s online course offering practical grounding on how AI can amplify misleading narratives and support detection and verification workflows.
Looking back on what we learnt from the Moldovan elections, we released two pieces that shed light on the role “fear” narratives played in the process, and on what needs to be done to better address electoral disinformation in the future:
How disinformation exploits fear: a Euractiv.ro / DeCode investigation tracks the storyline “Moldova is next Ukraine” in the run-up to the September 2025 parliamentary elections. The narrative was first seeded on Telegram, then repackaged into “news” by pseudo-informational sites (including the Pravda network), and finally amplified on TikTok.
In parallel, Andra-Lucia Martinescu (Diaspora Initiative) frames Moldova’s 2025 elections as a question of democratic integrity and resilience — asking what it means to “take the fight upstream” when pressure builds cumulatively through recurring narratives and cross-platform dynamics, rather than through one-off viral falsehoods.
Extending this lens to neighbouring Romania, Andra-Lucia Martinescu has also brought together a growing body of evidence on how electoral discourse gets shaped by recurring frames (distrust in institutions, legitimacy contests, diaspora-focused blame) and then activated across platforms during moments of tension — with attention to manipulation campaigns, disinformation sources, and coordinated networks in the Romanian information space.
PROMPT took part in Partisan’s Political Tech Review on Hybrid interference in the Moldovan elections (11 December). During the session, Andra-Lucia Martinescu presented her analysis of Moldova’s 2025 parliamentary elections, developed as part of PROMPT’s Second Narrative Report, showing how hybrid interference unfolds across platforms, how electoral narratives are gradually recalibrated over time, and why moving beyond isolated incidents is essential to understanding pressure on democratic processes.
Over the past weeks, PROMPT’s tools were tested at:
- EJO Annual Meeting (21 November): focus on journalistic practices and emerging best-practice guidelines.
- Open webinar (28 November): MOOC dissemination + a first public demo of the tool.
- Beta-testing session (11 December): focus on the PROMPT Corpus Analyser.
If you haven’t tested the tool, join our beta-testing program! → You can sign up via the link at the bottom of PROMPT’s website.
A new paper by Dr. Clément Bénesse (opsci.ai) in the Paris Journal on AI & Digital Ethics (PJAIDE) proposes a way to detect slow co-optation of “trusted” content creators — the kind that doesn’t look like a sudden pivot, but more like a long, quiet drift.
The core idea is to avoid ideological labelling and focus on behavioural change: creators’ posts are mapped into semantic embeddings (with the embedding model treated as a black box) and then modelled as a time-evolving footprint. Hidden Markov Models capture inter-topic transitions, Ornstein–Uhlenbeck processes capture intra-topic semantic drift, and anomaly/coordination signals are derived from metrics like “cost of postage” and structural comparisons using Wasserstein distances.
Practically, the method is designed to return time windows of interest for human analysts to investigate (rather than “naming” ideology), which is exactly the kind of neutral, operational output public and civic actors can actually use.
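To make the “time windows of interest” idea concrete, here is a deliberately simplified sketch: it flags windows in which a creator’s embedding centroid has drifted too far (in cosine distance) from an early baseline. This rolling-centroid heuristic is an assumption for illustration only, not the paper’s actual HMM/Ornstein–Uhlenbeck pipeline; all names, thresholds and data below are invented.

```python
import numpy as np

def drift_windows(embeddings, window=20, threshold=0.2):
    """Hypothetical simplification of the drift-detection idea:
    flag time windows whose embedding centroid lies more than
    `threshold` (cosine distance) from the baseline (first window)."""
    base = embeddings[:window].mean(axis=0)
    base /= np.linalg.norm(base)
    flagged = []
    for start in range(0, len(embeddings) - window + 1, window):
        centroid = embeddings[start:start + window].mean(axis=0)
        centroid /= np.linalg.norm(centroid)
        if 1.0 - float(base @ centroid) > threshold:
            flagged.append((start, start + window))
    return flagged

# Synthetic demo: 100 stable posts near one direction, then 100 posts
# drifting slowly toward a second direction (the "long, quiet drift").
rng = np.random.default_rng(0)
stable = rng.normal(0, 0.05, size=(100, 16)) + np.eye(16)[0]
drift = np.linspace(0, 1, 100)[:, None] * np.eye(16)[1]
drifting = rng.normal(0, 0.05, size=(100, 16)) + np.eye(16)[0] + drift
posts = np.vstack([stable, drifting])

windows = drift_windows(posts, window=20, threshold=0.2)
print(windows)  # windows handed to a human analyst for review
```

As in the paper’s design, the output is not an ideological label but a set of time spans worth a closer human look.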
The opsci.ai team attended the Political Tech Summit in Berlin (23–24 January 2026) to present the PROMPT project, its technical stack, products and insights, and discussed election integrity alongside the European Platform for Democratic Elections (EPDE).
Partisan, the organisation behind the event, has joined the third edition of the European Narrative Observatory as a partner, so let’s meet again at Political Tech Summit 2027 with exciting developments to share!
PROMPT was also presented at:
- Médias en Seine (Paris, 15 January 2026), where opsci.ai COO Jordan Ricker discussed how media outlets face informational warfare with representatives from NewsGuard, Radio France and Ziarul de Gardă.
- University of Giessen (29 January 2026): PROMPT’s project coordinator Martin Lestra discussed the opportunities and pitfalls of finding and using open data in the fight against disinformation.
And we’ll be happy to meet you at:
- AI Day 2026 at Station F, Paris, on 10 February, for a roundtable titled "AI's Dual Nature: Can AI fight the Disinformation it creates?" with Jordan Ricker.
- The EU Media Literacy Expert Group on 5 March in Brussels.