
IRIS 2018-3:1/7

European Commission

Evaluation of the EU Code of Conduct on countering illegal hate speech online


Eugénie Coche

Institute for Information Law (IViR), University of Amsterdam

On 19 January 2018, the European Commission published its third evaluation of the EU Code of Conduct on countering illegal hate speech online. The Code was launched in May 2016 to counter the spread of illegal hate speech online and was signed up to by four IT companies, namely Facebook, Twitter, YouTube and Microsoft. These companies agreed to remove, where necessary, illegal hate speech from their respective platforms within twenty-four hours of being notified by their users. As part of their agreement with the Commission, they also committed to assessing the progress made in implementing the Code. In the light of this, the Commission carried out a first evaluation on 7 December 2016 and a second on 1 June 2017. These evaluations are the result of monitoring exercises based on notifications issued by civil society organisations and on a commonly agreed methodology. This system makes it possible to evaluate how each platform handles a received notification and whether it ultimately leads to the removal of the content within the agreed timeframe.

The results of the third evaluation showed important progress at several levels. Indeed, 70% of notified illegal hate speech is now removed by the IT platforms, compared with 59% at the second evaluation and 28% at the first. All the IT companies have improved in that regard. Moreover, the agreed twenty-four-hour timeframe for reviewing notifications is respected in the majority of cases (81.7%), roughly double the 2016 figure (40%). Reporting systems, transparency, reviewing staff and cooperation with civil society organisations have also improved. Concerning transparency towards users, a positive trend was likewise identified: feedback is given to notifying users in 68.9% of cases. In that regard, however, Facebook and YouTube have made only minor improvements since the previous evaluation. The former raised its feedback rate by 0.4 percentage points (94.1% compared with 93.7% in 2017), while the latter increased its rate by only 0.1 percentage points (20.8% compared with 20.7% in 2017). By contrast, Twitter made considerable progress, going from giving feedback in 32.8% of cases in 2017 to 70.4% (a difference of 37.6 percentage points). Importantly, all the IT companies treated notifications from “trusted” flaggers (NGOs or public bodies) differently from those submitted by general users. In the case of Facebook, however, the observed discrepancy was only minor (1.7 percentage points). Lastly, the most frequently cited grounds for reporting hate speech were “ethnic origin” (17.1%), followed by “anti-Muslim” hatred (16.4%) and xenophobia (16%). Grounds such as race, religion and gender identity were cited in only a minority of cases (7.9%, 3.2% and 3.1% respectively).

In view of these improvements, satisfaction was expressed by both Andrus Ansip, European Commission Vice-President for the Digital Single Market, and Vĕra Jourová, EU Commissioner for Justice, Consumers and Gender Equality. Indeed, the latter declared that “[t]he Code of Conduct is now proving to be a valuable tool to tackle illegal content quickly and efficiently”. However, as the evaluation showed, the IT companies still need to pay more attention to transparency.

European Commission, Code of Conduct on countering illegal hate speech online - Results of the 3rd monitoring exercise, 19 January 2018 EN
European Commission, Countering illegal hate speech online - Commission initiative shows continued improvement, further platforms join, 19 January 2018 EN