International media call for transparency from AI companies
By Alessandro Gisotti
It is time for AI companies to engage in a dialogue with the media about the transparency of sources and the use of journalistic content. That is the central demand of the world’s leading media associations through the campaign "Facts In, Facts Out." The initiative is promoted by the European Broadcasting Union (EBU), of which Vatican Radio is a founding member, the World Association of News Publishers (WAN-IFRA), and the International Federation of Periodical Publishers (FIPP). The campaign was inspired in particular by the News Integrity in AI Assistants report produced by the BBC and the EBU. The study, published in June 2025, highlighted how AI tools—regardless of geographic location, language, or platform—systematically alter, decontextualize, or even misuse news from trusted sources such as media websites.
AI tools are not yet a reliable source for news
“For all its power and potential, AI is not yet a reliable source of news and information – but the AI industry is not making that a priority,” said EBU Director of News, Liz Corbin. This is why the "Facts In, Facts Out" campaign is calling for urgent attention to source transparency: the credibility of journalism is at stake. WAN-IFRA CEO Vincent Peyregne said: “If AI assistants ingest facts published by trusted news providers, then facts must come out at the other end, but that’s not what’s happening today.”
Campaign promoters emphasize that a growing number of people use AI platforms as a channel for accessing news. When these tools distort, modify, or even falsify information, the result is a significant erosion of trust in the media—an essential element of a democratic system. This is why, the campaign underlines, the issue must be urgently addressed, especially since AI use for accessing information is only expected to grow in the coming years, particularly among younger generations.
Five principles for information transparency in AI
The "Facts In, Facts Out" campaign is part of the broader initiative News Integrity in the Age of AI, which outlines five fundamental principles for AI companies to follow: 1) No consent – no content. News content must only be used in AI tools with the authorization of the originator. 2) Fair recognition. The value of trusted news content must be recognised when used by third parties. 3) Accuracy, attribution, provenance. The original source behind any AI-generated content must be visible and verifiable. 4) Plurality and diversity. AI systems should reflect the diversity of the global news ecosystem. 5) Transparency and dialogue. Technology companies must engage openly with media organisations to develop shared standards of safety, accuracy, and transparency.
Based on these principles, the goal is to work together to ensure that information remains truthful and credible. For the EBU’s Liz Corbin, “this is not about finger-pointing; we are inviting the tech companies to engage in a meaningful dialogue with us. The public rightly demands access to quality and trustworthy journalism no matter what technology they use.”