Germany

[DE] BLM Media Council adopts AI guidelines

IRIS 2024-1:1/23

Katharina Kollmann

Institute of European Media Law

At its ninth meeting held on 19 October 2023, the Medienrat (Media Council) of the Bayerische Landeszentrale für neue Medien (Bavarian New Media Authority - BLM) adopted new guidelines on the use of artificial intelligence (AI) in journalism. Designed to protect the credibility of journalism and preserve democratic debate, the guidelines are merely an initial set of recommendations regarding the use of AI systems in journalism. In view of AI’s rapid development, however, they will need to be continuously updated.

The authors acknowledge that the use of AI in journalism has some benefits. For example, it can ease the burden of repetitive tasks and research activities, search through archives and documents, or pre-filter content in order to provide initial protection against hate messages on social media. However, they also warn that AI systems carry certain risks, such as a lack of transparency around decision-making processes (so-called "black box technology") or misuse in journalism for targeted disinformation campaigns.

In the absence of a universal definition of AI, the BLM guidelines define it as “technologies that enable computers and machines to imitate human cognitive skills such as logical thinking, learning or creativity. Using algorithms, these technologies analyse data and recognise patterns so they can fulfil tasks, solve problems and make decisions independently.”

The first guideline is “Observe journalistic due diligence”: even when AI is used, research and reporting must meet journalistic quality standards such as objective reporting, careful presentation and research, and fact-checking prior to publication. This particularly concerns the disclosure of information sources and technical aids used.

The second guideline is “Editorial responsibility remains with people”: AI results should not be trusted unconditionally. Responsible use of AI must include the possibility for humans to make corrections. Approval processes at editorial level must also be clearly regulated and a complaints body set up.

The third guideline is “Label transparently”: AI use must be appropriately labelled, both when AI is used to produce published content and when it is used to moderate content. For example, the technology used, the data collected and the person responsible for the published content should be identified.

The fourth guideline is “Certify AI voluntarily”: certified AI that meets certain security and quality standards should be used if possible.

The fifth guideline is “Keep an eye on copyright and exploitation rights”: journalists must look out for infringements of third-party copyright in particular. At the same time, the remuneration rights of media professionals must also be respected if works they have created are used by or with the help of AI.

The sixth guideline is “Comply with relevant data protection laws”: when data is collected, prepared or processed using AI, data protection laws must be upheld, especially where personal data is concerned.

The seventh guideline is “Enable balanced opinion formation despite personalisation”: even when AI is used, reporting must be balanced, diverse and neutral. AI data sources must be scrutinised regularly. Filter bubbles resulting from personalised content should be avoided.

The eighth guideline is “Stay critical”: journalists should remain critical of the results of generative AI and the data sources used, for example in order to prevent existing prejudices from being exacerbated or to avoid overconfidence in AI results where quality control is lacking.

The ninth and final guideline is “Relieve staff instead of replacing them”: AI can ease the burden on staff, but can never replace them. The objective of AI use in day-to-day editorial work should be to create a “balanced relationship” between machine and human activity.


This article has been published in IRIS Legal Observations of the European Audiovisual Observatory.