Guidelines on the responsible implementation of artificial intelligence systems in journalism

IRIS 2024-2:1/7

Urška Umek

Council of Europe, Directorate General of Democracy and Human Dignity

Recent years have seen a rapid change in the way we consume and produce news, information, and entertainment. The rise of artificial intelligence (AI) has opened new frontiers in journalism. Algorithms can now help with complex data analysis and fact-checking; they can power news recommendation systems to deliver personalised and engaging content to audiences; and they can also generate articles and video content.

AI can be a valuable tool for journalists: it can greatly improve the efficiency of newsgathering and reporting. But it also raises many questions about what it means for the future of journalism. As more and more news organisations incorporate AI-powered systems into their professional practices, important legal and ethical issues arise. Will AI facilitate journalists’ work, or will it eventually replace them? Which journalistic processes are suitable for automation, and which are not? If an AI algorithm produces inaccurate, biased or misleading content, who is responsible? How can editorial values be translated into algorithms? And how can proper oversight of the use of AI in journalism be ensured?

Moreover, AI systems cannot critically assess the sources of information on which they are trained, so AI-generated stories can contribute to the spread of mis- and disinformation. In journalism, mistakes can be costly and can easily undermine public trust in the media, which is why transparency vis-à-vis the audience is also needed. These concerns highlight the need for clear and transparent guidance on the use of AI in journalism.

Over the past two years, the Committee of Experts on Increasing Resilience of Media (MSI-RES), together with representatives of member states, researchers in the fields of journalism, information law and technology, members of journalists' associations and civil society organisations, has developed such guidance in the form of a soft-law instrument: the Guidelines on the responsible implementation of artificial intelligence systems in journalism. These guidelines were adopted by the Council of Europe’s Steering Committee on Media and Information Society (CDMSI) on 30 November 2023.

They are a practical tool detailing how AI systems should be used to support the production of journalism. They focus on journalistic AI, that is, technologies which support the core business of journalism: producing information, ideas and opinions about contemporary affairs. The text first addresses news media organisations, covering the different stages of journalistic production, from the decision to use AI systems, through the acquisition of AI tools and their incorporation into professional practice, to the external dimension of using AI in newsrooms, that is, how it affects audiences and wider society.

The key idea of the guidelines is that AI should not only facilitate journalistic work and the sale of media products but should also be used in a way that promotes society’s interest in being informed. AI should support the functioning of the media as a forum for public discourse and as a public watchdog. The guidelines list the factors which news media organisations need to consider when implementing AI systems in their work, and explain how to implement AI in a way that does not undermine the accuracy and credibility of news content.

For example, the use of AI is an editorial decision and requires editorial oversight of outputs to prevent or mitigate bias and false information. News organisations should carry out appropriate risk assessments before opting for specific AI solutions. The guidelines further call for disclosure of use where AI systems could meaningfully affect the audience's rights or influence how audiences interpret the outputs. They also address how the use of AI brings new values and priorities in relation to the audience, such as the transparency and explainability of AI, respect for privacy and data protection, and cognitive autonomy.

In addition, there are three sections for other addressees. The guidelines propose specific responsibilities for technology providers which develop and design AI systems used for journalistic production. They also provide a summary of the existing guidance applicable to online platforms which disseminate news. Finally, the guidelines include obligations for states, with guidance on how they can support quality and sustainable journalism through both financial support and regulation, for instance by introducing standards for the responsible development and use of journalistic AI, such as the labelling of synthetic content (content produced by AI systems), an issue which is currently much discussed as it relates to the authenticity of journalistic content.

The guidelines also include two annexes. The first is very practical: it contains a procurement checklist specifying the most important considerations that should guide the process of acquiring AI systems and implementing them in media organisations’ professional practices. The second annex contains a summary of all relevant Council of Europe instruments, detailing specific concerns related to the use of technology and the solutions provided in the organisation’s texts.


This article has been published in IRIS Legal Observations of the European Audiovisual Observatory.