Best Practice Guidelines for Responsible AI and Disinformation Response in (Transnational) Journalism

04/02/2026

Perspectives from the 2nd European Narrative Observatory

Disinformation has become one of the defining challenges of our digital era. It spreads across borders with ease, exploiting the same infrastructures that connect us. What begins as a local rumour can quickly evolve into a transnational narrative, amplified by algorithms and echoed across platforms, networks, and languages. In this environment, journalists and researchers face a moving target: not just falsehoods, but shifting ecosystems of influence that shape how people understand truth, trust, and sometimes even democracy itself.

The PROMPT project was created to respond to this challenge. It brings together journalists, researchers, and (fact-checking) organisations from across Europe to study how disinformation narratives are formed, how they can be detected, and how responses can be coordinated across borders. PROMPT's work bridges practice and research, combining interviews with fact-checkers, case studies of national elections (Romania and Moldova), and the development of AI tools such as the Corpus Analyser, the Disinfo Scanner, and the Wikipedia Sensitivity Meter and Barometer, all designed to make disinformation patterns and networks visible and traceable.

These Best Practice Guidelines grow directly out of that collaborative work. Their aim is not to reinvent the wheel: there are already numerous ethical frameworks and policy recommendations addressing (generative) artificial intelligence (AI) and information integrity (see e.g. UNESCO 2024). Instead, PROMPT seeks to bring existing principles together, test them in practice, and ground them in the real experiences of journalists, fact-checkers, and media professionals working in diverse European contexts.

Artificial intelligence plays a central role in this conversation. AI tools increasingly shape how information is created, verified, and distributed. They offer powerful opportunities for detection, translation, and analysis, but they also introduce new risks of bias, opacity, and fabrication. Within PROMPT, AI is treated not as a replacement for human work and judgment, but as a tool that must be used responsibly, transparently, and under clear editorial oversight.