[DE] Study on AI acceptance in journalism

IRIS 2024-5:1/22

Christina Etteldorf

Institute of European Media Law

On 21 March 2024, the Landesanstalt für Medien Nordrhein-Westfalen (North Rhine-Westphalia media authority), one of the 14 German state media regulators, published a study it had commissioned on the acceptance of Artificial Intelligence (AI) in journalism. The study concludes that the majority of respondents are, in principle, open to the use of AI to support the work of journalists. However, based on the results of a number of experiments, the study suggests that transparent regulation of AI use is required in order to increase acceptance and dispel people’s reservations.

New technological possibilities created by process automation and AI can also provide opportunities for media providers, especially in relation to the production of editorial content. With this in mind, the study carried out for the North Rhine-Westphalia media authority examined how people view the use of automated processes in content creation and what can be done to dispel any concerns and reservations. Based on around 1,000 interviews with Internet users aged 14 and above, the survey focused on media consumption habits and, in particular, people’s attitudes towards content prepared with the help of AI. The responses tended to depend on the subject matter of the content: the use of AI to produce news articles or political reporting was considered much less acceptable than its use in fields such as sport and entertainment. Around 35% thought that AI could help make journalistic processes more efficient; in particular, they thought it could make it easier to find programmes in media libraries, assist with research activities and help tailor content to users’ preferences. Potential job losses were seen as the main drawback of process automation (51%).

The survey participants were also shown two pairs of video clips (one pair with a human presenter and one pair with just a voice-over), with one clip in each pair having been created using AI. In terms of quality (e.g. whether they were credible, informative, entertaining, understandable, etc.), the clips were considered more or less equal, although in both cases the AI clip with just a voice-over and no human presenter was deemed slightly better than its non-AI equivalent. In both cases, viewers were unable to clearly tell which clip had been made with the help of AI.

On the basis of the answers given, the study also concluded that AI use is more widely accepted (61%) when reports are produced by “real” journalists and presented by “real” presenters who are “only” supported by AI. Far fewer people favoured reports fully produced by AI (35%). When asked how the acceptance of AI use could be increased, many thought labelling obligations (53%), binding accountability obligations (42%) and supervision of AI use in journalism (40%) were “very important”.


This article has been published in IRIS Legal Observations of the European Audiovisual Observatory.