Journalists and news audiences are concerned about the use of generative artificial intelligence (AI) in journalism. A new report sheds light on the impact of AI-generated content on news production and trust.
The research, led by RMIT University, summarises three years of inquiries across seven countries focusing on AI-generated content, journalists’ use of AI, and audience perceptions.
While only 25 per cent of news consumers were confident they had encountered AI-generated content, around 50 per cent were unsure, suggesting that news organisations may not be clearly disclosing their AI use.
The report identifies numerous AI applications in journalism, with audiences generally feeling more comfortable when AI is used for behind-the-scenes tasks, such as transcription and idea generation, rather than for editing or content creation.
Biases and transparency
Other key findings highlight various biases in AI, including gender, class, and environmental biases, which stem from training data and algorithmic design. AI models often lack transparency, making it difficult for journalists to verify information.
While AI-generated illustrations are more accepted than AI images replacing photojournalism, concerns persist about misleading content, labour impacts, and newsroom policies. Many journalists struggle to identify AI-generated material, and policies on AI use in newsrooms remain inconsistent or vague.
Both journalists and audiences agree on the importance of transparency, with audiences preferring clear labelling of AI-generated content. Familiarity with AI tools in everyday life generally leads to greater acceptance of their use in journalism.
Significant risks
The risks of AI in journalism are significant. AI-generated images, headlines, and summaries can introduce errors or misleading information. The report cites instances where AI-generated news alerts produced false reports.
Furthermore, AI-generated content can be mistaken for real events, as seen when an AI-created image of Princes William and Harry embracing went viral and was later debunked by journalists.
Audience comfort with AI in journalism varies depending on the task.
People were generally more accepting of AI being used to enrich existing content, such as generating icons for infographics or animating historical images, but were uncomfortable with AI-generated avatars presenting the news.
Ultimately, trust in AI’s role in journalism depends on transparency, ethical considerations, and ensuring that AI tools complement rather than replace human judgment.
Cautious optimism
A new report from the Thomson Reuters Foundation also found that journalists in the Global South and emerging economies are cautiously optimistic about AI’s impact on journalism.
While AI is transforming news production, distribution, and consumption, its adoption and accessibility vary globally.
The report, Journalism in the AI Era: Opportunities and Challenges in the Global South and Emerging Economies, is based on a survey of over 200 journalists across 70 countries, conducted between October and November 2024.
The findings reveal that over 80 per cent of respondents use AI for various tasks, with 42 per cent expressing optimism about its future role. However, more than half voiced strong ethical concerns.
Challenges include limited training, unequal digital access, concerns over losing human-centred skills, and a lack of transparency – underscored by the finding that only 13 per cent of journalists reported that their employers had AI policies in place.
[Edited by Brian Maguire | Euractiv’s Advocacy Lab]