United Kingdom

[GB] The House of Lords Communications and Digital Committee publishes its report on AI large language models

IRIS 2024-3:1/18

Alexandros K. Antoniou

University of Essex

On 2 February 2024, the House of Lords Communications and Digital Committee (a Lords Select Committee that considers the media, digital and creative industries) published its inquiry report on large language models (LLMs) and generative AI. The Committee forecasts AI development trends over the next three years, contrasting them with the regulatory stance outlined in the government’s March 2023 AI White Paper. It criticises the government’s disproportionate emphasis on AI safety, warning of missed opportunities. Priority recommendations highlighted in the report include support for innovation, robust regulatory supervision, proactive risk mitigation, and copyright protection.

More specifically, the Committee’s report covers a broad range of topics regarding the future impact, regulation, innovation, and ethical considerations of LLMs and generative AI. It highlighted that LLMs were projected to have transformative impacts akin to the invention of the internet, advising the UK to brace for a period of “heightened technological turbulence” (para. 28) to leverage opportunities effectively.

The Committee regarded fair market competition as essential for businesses to flourish in the fast-moving LLM sector, noting that mid-tier enterprises stand to gain from leveraging a combination of open and closed-source technologies. The government was advised to prioritise fair market competition as a policy objective, refraining from disproportionately favouring either open or closed models, and to collaborate with regulatory bodies to oversee competition in foundation models. To mitigate the risk of regulatory capture, bolstered governance measures were recommended, including red teaming, increased training to enhance expertise, and the soliciting of external feedback in policy formulation processes (para. 49).

The Committee underlined that LLMs hold significant potential to benefit the economy and society, emphasising the importance of responsible development and deployment (para. 65). While recognising these advantages, it urged the government to manage labour market disruption and mitigate digital exclusion. It also cautioned that the government ought to strike a better balance between innovation and risk, avoiding an overly narrow focus on high-stakes AI safety (para. 80).

Regarding risk management, the Committee noted that LLMs pose security concerns by facilitating existing malicious activities rather than introducing entirely new risks. The government, in collaboration with industry, should swiftly scale up existing cyber security measures. Despite advancements in understanding AI risks and in global cooperation, the absence of a standardised assessment framework impedes a more accurate evaluation of the magnitude of these risks; aligning an AI risk taxonomy with the National Security Risk Assessment was advised. While catastrophic risks within three years were deemed improbable and “apocalyptic concerns about threats to human existence [were] exaggerated” (para. 23), monitoring next-generation capabilities and fostering responsible development remain crucial (paras. 140-141). Societal risks, including discrimination and bias, require robust mitigation strategies (para. 161), and clarity on how data protection laws apply to LLM processes is imperative.

Moreover, the Committee recommended that the UK carve its own path in AI regulation, avoiding direct emulation of EU, US, or Chinese models, fostering technology diplomacy, and serving as a global example. Although international regulatory alignment and cooperation are vital, challenges and delays are anticipated. Extensive legislation solely targeting LLMs was deemed premature due to the technology's novelty and uncertainties: “the technology is too new, the uncertainties too high and the risk of inadvertently stifling innovation too great” (para. 187). Instead, the priority should be to establish strategic guidance for LLMs and swiftly implement adaptable regulatory frameworks conducive to innovation.

The report critiqued the slow pace of implementing the AI White Paper’s proposals, stressing that empowering existing regulators with standardised powers and resources is key to the success of AI governance initiatives. In addition, the Committee stressed the importance of respecting copyright law and treating rightsholders fairly in the development and use of LLMs. Despite the complexity of applying copyright law to LLM processes, the fundamental principles remain clear: to reward creators, prevent unauthorised use of works, and foster innovation. The Committee took the view that the current legal framework was inadequate to achieve these goals (para. 246). Should uncertainty over copyright protections persist, the Committee urged the government to consider updating legislation to ensure it remains technologically neutral and future-proof. Measures such as empowering creators, transparency in data usage, and promoting good practice through collaboration with licensing agencies and data repository owners were recommended to safeguard copyright principles.

Of note, an Intellectual Property Office (IPO) working group was convened in June 2023 to establish and formalise best practices for the use of copyright, performance, and database material in AI applications, including data mining. After initial plans for a legislative solution were withdrawn in March 2023, progress towards a voluntary code proved challenging. In its response to the AI White Paper consultation, the government confirmed that the IPO had been unable to broker a voluntary code of practice between AI developers and rightsholders on the use of copyrighted materials for AI training (CP 1019, para. 29). The Committee’s recommendation that the process return to the government if no code is produced is therefore timely, and the publication of its report may provide additional impetus towards a resolution.

Overall, the Committee underlined the need for a balanced and proactive approach to managing the development and deployment of LLMs, ensuring they maximise societal benefits while mitigating associated risks. It called for strategic investment in innovation, clear and adaptable regulatory frameworks, and international cooperation to responsibly navigate the intricate landscape of AI development. The UK government has two months to respond to the report.


This article has been published in IRIS Legal Observations of the European Audiovisual Observatory.