
Trust issues… AI’s double-edged sword cuts a fine line for modern journalism – Euractiv



The rise and integration of artificial intelligence into our daily lives is no longer a futuristic fantasy, but new data shows AI in the media sector might do more harm than good.

Whether AI leads to better journalism is an ongoing debate, but media companies around the world are not waiting to find out which way the new technology lands. The sector is jumping on the trend and embracing the latest developments in the hope of engaging audiences, keeping revenue above water, and sustaining their business models.

Data published by Bentley University and Gallup in their latest Business in Society Report shows that a majority of Americans (56%) believe AI, in general, does equal amounts of harm and good.

However, the percentage of those who believe that the harm caused by AI outweighs its benefits remains greater than the percentage who believe the opposite.

A previous study, conducted in 2023, showed that while Americans fear AI will reduce jobs, they also recognise the benefits it could bring in areas where it performs as well as or better than humans.

These areas include tasks such as customising the content users see online (68%), recommending products or services to them (65%) and assisting students with homework or studying (60%), among others.

A completely different picture is painted in the European Union. “The Digital Decade”, a Eurobarometer report published in July, highlights the impact the digitalisation of daily public and private services has had on citizens’ lives.

Almost three-quarters of Europeans (73%) consider digitalisation to have made their lives easier, while just under a quarter (23%) say the opposite. Those who have seen no change in their lives, or who do not know, make up a negligible share.

Citizens were also asked to identify some of the issues with the biggest personal impact on them, in the context of the EU’s enforcement of legislation regulating the behaviour of online platforms.

The misuse of personal data (46%) and fake news and disinformation (45%) topped the list of concerns, followed by insufficient protection for minors (33%) and other issues such as hate speech and terrorist content.

In this regard, just under eight in ten (78%) think it is important for public authorities to shape the development of AI and other digital technologies to ensure they respect citizens’ rights and values.

To get a sense of how global audiences would react to the involvement of AI in news, the Reuters Institute for the Study of Journalism first examined general awareness of the subject through a survey of nearly 100,000 people across 47 countries.

Less than half (45%) of respondents said they had heard or read a large or moderate amount about AI, slightly more (49%) said they had heard or read a small amount, and 6% said they did not know.

Respondents were more comfortable with the idea of AI supporting behind-the-scenes tasks such as translation or transcription than with it replacing journalists.

The data show that less than half of respondents in selected markets currently feel comfortable using news made by humans with the help of AI. Far fewer are comfortable using news made mostly by AI with human oversight.

Examining the trust element, the qualitative findings of the study suggest that individuals who trust news outlets described as reputable or prestigious tend to be more open to these organisations using AI.

These audiences exhibit greater faith in publishers’ ability to use AI responsibly, while untrusting audiences may find their trust further diminished by the introduction of new technologies.

A global POLIS survey on journalism and artificial intelligence, led by Charlie Beckett, director of the media think tank at the London School of Economics, found that newsrooms mostly use AI in news gathering, production and distribution.

The report argues that, while AI gives journalists more time by freeing them from routine labour and more capacity to create better journalism, it also comes with editorial and ethical responsibilities.

This comes at a time when society’s trust in the media stands at around 40%, according to the Reuters Institute for the Study of Journalism report, and when layoffs and closures driven by rising costs and falling advertising revenue are not unheard of.

While there is no disputing AI’s impact on the way information is disseminated, media professionals must be given the relevant tools and knowledge to use the technology as well as possible.

“If we value journalism as a social good, provided by humans for humans, then we have a window of perhaps 2-5 years, when news organisations must get across this technology,” remarked Beckett in his report on AI and journalism.

[By Xhoi Zajmi | Edited by Brian Maguire | Euractiv’s Advocacy Lab]




