Commission extends COVID-19 Disinformation Monitoring Programme and publishes Code of Practice on Disinformation reports
Ronan Ó Fathaigh
Institute for Information Law (IViR)
On 28 January 2021, the European Commission announced an important extension of the COVID-19 Disinformation Monitoring Programme, a transparency mechanism designed to ensure public accountability for the measures taken by signatories to the EU Code of Practice on Disinformation specifically to tackle COVID-19 disinformation (see IRIS 2020-6/9; IRIS 2019-6/4, and IRIS 2019-1/7). The Commission stated that the Programme would be extended to June 2021, with a “special focus” on vaccine disinformation and vaccine-related misinformation.
Notably, the Commission also published a set of reports on measures taken by the Code of Practice signatories, including Facebook, Google, Microsoft, TikTok and Twitter, to tackle COVID-19 disinformation. The Commission stated that platforms had (a) blocked “hundreds of thousands” of accounts, offers and advertiser submissions related to coronavirus and vaccine-related misinformation; (b) enhanced the visibility of “authoritative content”, with “millions of users” directed to dedicated informative resources; and (c) “stepped up their work” with fact-checkers to make fact-checked content on vaccination more prominent. However, the Commission has asked platforms to provide more data on the evolution of the spread of disinformation during the COVID-19 crisis, and on the “granular impact of their actions at the level of EU countries.”
Crucially, the platform reports detail a number of notable measures, including the following:
- Google updated its YouTube policy in October 2020 to cover vaccine disinformation, which “led to the removal of more than 700 000 videos related to dangerous or misleading COVID-19 medical information”; it also suspended the accounts of more than 1 800 EU-based advertisers for trying to circumvent its systems, including for COVID-19-related ads and offers.
- Twitter expanded its COVID-19 “misleading information policy” to cover misleading information about vaccines, so that tweets advancing harmful false or misleading narratives about COVID-19 vaccinations will be removed.
- Facebook stated that it had removed false claims about vaccines that had been “debunked by public health experts” on Facebook and Instagram, and had re-launched a pop-up on Facebook’s News Feed directing users to the “Facts about COVID-19” section of its COVID-19 Information Centre.
- Microsoft blocked over 323 000 advertiser submissions in the European Union directly related to COVID-19 and vaccine-related misinformation.
- TikTok reported that, from 21 December 2020, it has been rolling out a new vaccine tag for all videos with words or hashtags related to COVID-19 vaccines.
Finally, other measures taken by platforms include providing grants and free advertising space to governmental and international organisations to promote campaigns and information on the pandemic, and increasing the visibility of fact-checked content.
The Commission stated that it would assess the situation further in June 2021, and has asked platforms to “address shortcomings” previously highlighted, including by providing more data on the impact of the measures taken.
- European Commission, “Coronavirus disinformation: extended platforms' monitoring programme with focus on vaccines”, 28 January 2021
- European Commission, “Latest set of reports and the way forward: Fighting COVID-19 Disinformation Monitoring Programme”, 28 January 2021
This article has been published in IRIS Legal Observations of the European Audiovisual Observatory.