United Kingdom

[GB] Department for Digital, Culture, Media and Sport publishes its final report on disinformation and fake news

IRIS 2019-4:1/22

Julian Wilkins

Wordley Partnership and Q Chambers

On 18 February 2019, the Department for Digital, Culture, Media and Sport (DCMS) Committee published its Final Report on disinformation and fake news (“the Report”), following on from its July 2018 Interim Report. Since then, the Committee has held three further evidence sessions, inviting UK regulators and the Government to give evidence, as well as receiving a further 23 written submissions. In November 2018 the Committee hosted an “International Grand Committee”, inviting parliamentarians from nine countries.

The Report develops the areas covered in the Interim Report, including the definition, role and legal liabilities of social media platforms; data misuse and targeting in respect of the Facebook, Cambridge Analytica and AggregateIQ allegations (including evidence about Facebook’s knowledge of and participation in data-sharing); and Russia’s influence on overseas elections.

The Report reiterates the Interim Report’s position that social media companies are not simply “platforms” bearing no responsibility for the content on their sites. The Committee recommends the creation of a new category of tech company - one that is not necessarily either a “platform” or a “publisher” - and recommends that clear legal liability be established for such companies to act against harmful or illegal content on their sites.

The Committee supports the establishment of independent regulation, including a compulsory Code of Ethics - overseen by an independent regulator - setting out what constitutes harmful content. The independent regulator would have statutory powers to monitor relevant online tech companies, thereby creating a regulatory system for online content that is as effective as that for offline content industries.

The Code of Ethics should be similar to the Broadcasting Code issued by Ofcom, which is based on the guidelines established in section 319 of the Communications Act 2003. The Code of Ethics should be developed by technical experts and overseen by the independent regulator, in order to set down in writing what is and is not acceptable on social media. This should include harmful and illegal content that has been referred to the companies for removal by their users, or that should have been easy for the tech companies themselves to identify.

The same public body should have statutory powers to obtain from social media companies any information relevant to its enquiries, and to initiate legal proceedings against them if they fail to meet their obligations under the Code and do not act against the distribution of harmful and illegal content. This body should also have access to tech companies’ security mechanisms and algorithms in order to ensure that they are operating responsibly. It should, moreover, be accessible to the public and able to take up complaints from members of the public about social media companies.

In addition, the Report recommends that “inferred data” - details about an online user that are derived not from specific information that he or she has shared, but from an analysis of his or her data profile - be protected by law as strongly as personal information.

The Report also supports the Interim Report’s recommendation that a levy be placed on tech companies operating in the UK in order to support the enhanced work of the Information Commissioner’s Office (ICO), as well as to fund the new independent regulatory system.

Concerning data use and data targeting, the Committee looks back at the Cambridge Analytica scandal, which was prompted by Facebook’s policies and its decision to override its users’ privacy settings in order to transfer their data to other parties. In that regard, the Report supports the Interim Report’s view that the dominance of a handful of powerful tech companies has resulted in their behaving as if they were monopolies in their own areas, and that this raises concerns about the data on which their services are based. The Report states that the Government should consider the impact of such monopolies on the political world and on democracy. In particular, the Committee recommends that the Competition and Markets Authority conduct a comprehensive audit of the operation of the advertising market on social media.

As regards advertising and political campaigning, the Report repeats the Interim Report’s recommendation that the Government examine how UK law should define “digital campaigning”, including reaching agreed definitions of what constitutes online political advertising. There also needs to be, according to the Committee, an acknowledgement of the role and power of unpaid campaigns and Facebook Groups in influencing elections and referendums. The Committee accordingly considers that the Government should review the regulations governing political work during and after elections and referendums. Among other things, it recommends that the Government clarify what constitutes “political advertising” and which advertising is sponsored. It also insists that the ICO’s proposed Code of Practice on the use of personal information in political campaigning - applying to all data controllers who process personal data for the purposes of political campaigning - be underpinned by primary legislation. Furthermore, the Report points out that tech companies must address the issue of shell companies and other professional attempts to hide the identity of purchasers of advertisements, especially political advertisements. As part of the drive for advertising transparency, the targeting methods employed should be fully disclosed.

The Committee also examined the issue of foreign influence in political campaigns and makes a series of related recommendations.

Concerning digital literacy, the Report recommends that digital literacy become one of the main pillars of education and that social media platforms develop online tools to help social media users distinguish between quality journalism and articles from less reliable or reputable sources.

The Report notes that the Government has accepted the Interim Report’s recommendations and that, instead of using the term “fake news”, it will use the term “disinformation” to describe the deliberate creation and sharing of false and/or manipulated information that is intended to deceive and mislead the audience, either to cause harm or for political, personal or financial gain. The term “misinformation” will be used to refer to the inadvertent sharing of false information.

Finally, the Report notes that the use of algorithms by social media companies can foster bias; the Centre for Data Ethics and Innovation, set up in 2017, is therefore dedicated to advising on how to enable and ensure ethical, safe and innovative uses of artificial intelligence.


This article has been published in IRIS Legal Observations of the European Audiovisual Observatory.