Global news organizations are issuing a stark warning to artificial intelligence developers, urging immediate action to safeguard truth and protect the integrity of journalism. As the world grapples with a surge in AI-generated content, media leaders caution that the technology is eroding public trust and threatening democratic processes. The call comes on World News Day from Liz Corbin, Director of News at the European Broadcasting Union (EBU), and Vincent Peyrègne, CEO of WAN-IFRA, the World Association of News Publishers, among others, who argue that individuals deserve factual information, not AI’s potentially deceptive interpretations.
From geopolitical conflicts like the war in Gaza to everyday matters such as healthcare and local bus route changes, the need for trustworthy news is paramount. Yet, the proliferation of sophisticated AI-generated content makes it increasingly difficult for audiences to discern fact from fiction. “We must constantly question what’s real and what’s the creation of artificial intelligence,” the authors state, warning that AI’s convincing output fuels mistrust, conspiracism, social polarization, and democratic disengagement.
Recent research conducted by the BBC this year underscores these concerns, revealing that half of AI-generated answers to news-related queries omitted crucial details or contained significant errors. AI assistants frequently produced garbled facts, fabricated or misattributed quotes, decontextualized information, and unacknowledged paraphrasing of original reporting. Compounding this, the work of professional journalists — particularly from local, regional, and independent media — is often scraped without permission, then repackaged and redistributed by AI systems without credit or compensation. Media leaders argue that these practices amount to a “sabotage of news,” draining already depleted public trust.
While many broadcasters and news publishers are responsibly integrating AI to enhance their journalism — such as for automating translation, detecting misinformation, or personalizing content — they emphasize that these tools must be deployed ethically, transparently, and carefully. The EBU and WAN-IFRA, alongside a growing coalition of journalistic organizations, are therefore demanding urgent changes from AI developers.
They have presented five clear, common-sense requirements that they believe any ethical technology developer should adopt:
1. **No Content Without Consent:** AI systems must not be trained on news content without explicit permission. Unsanctioned scraping is deemed intellectual property theft that undermines both rigorous work and public trust.
2. **Respect Value:** High-quality journalism, expensive to produce but vital for societal well-being, demands fair compensation from AI tools that benefit from it.
3. **Be Transparent:** When AI-generated content relies on news sources, those sources must always be clearly cited and linked, ensuring accuracy and proper attribution.
4. **Protect Diversity:** AI tools should actively amplify pluralistic, independent, public interest journalism to foster a robust and healthy information environment.
5. **Work With Us:** AI companies are invited to engage in serious, solutions-driven dialogue with the news industry, partnering with journalists rather than merely viewing them as free data suppliers.
The news leaders frame this not just as an industry issue, but as a “civic challenge” impacting every individual who relies on credible information to make life decisions, form opinions, or cast their vote. They urge leaders of the AI revolution to address these critical issues now, emphasizing that while tech companies frequently discuss trust, it must be built on tangible actions. Without urgent corrective measures, they warn, AI risks not only distorting the news but destroying the public’s fundamental ability to trust, leading to disastrous consequences for society as a whole.

