France

[FR] Protection of minors online: AI and Digital Council recommendations to go further than age control

IRIS 2026-4:1/9

Amélie Blocman

Légipresse

At a time when the draft law aimed at protecting minors from the risks associated with using social networks is due to be examined shortly by the joint committee, a note from the AI and Digital Council (Conseil de l'IA et du numérique - CIANum), published on 30 March, looks more specifically at the age-control system that has been the subject of debate in parliament.

The note examines the many risks to which minors are exposed on social networks, as well as current and planned regulations and their limitations. In this respect, the Council notes an international movement to regulate digital services in order to improve the protection of minors online. In Australia, for example, a ban on minors' access to social networks has been in force since December 2025, via the Online Safety Amendment (Social Media Minimum Age) Act 2024, which has prohibited 10 platforms from accepting users under the age of 16, leading to the deletion of millions of accounts. Other countries (the United Kingdom, China, etc.) are using alternatives such as device-level age verification, mandatory disconnection times and secure digital interfaces.

The European Commission recently confirmed France's competence to ban access to social networks for under-15s, while specifying that it was up to the European body to impose additional obligations on very large platforms. The European Union is also experimenting with several methods of age verification, while leaving member states some leeway in defining the age threshold.

However, according to the AI Council, "age verification will not be enough to address all the pitfalls, or to guarantee a peaceful digital life for all users". Many blind spots remain: the growing use of generative AI, the need for better consideration of the fundamental rights of minors, and structural changes to mitigate the effects of a platform business model based on the attention economy.

Rather than simply regulating by age, the AI Council is advocating a thorough rethink of the framework for digital services, and is making a series of recommendations along these lines. It calls for the creation of a European standard for the protection of minors online, the opening up of platform functionalities, the enshrinement of a right to parameterisation and greater transparency in algorithms.

Given the inadequacy of the "technically neutral role" criterion adopted by the Law on Confidence in the Digital Economy, the CIANum calls for a rethink of the dichotomy between host and publisher. The Digital Services Act (DSA), without formally creating a third status, distinguishes, within the category of hosts, online platforms and very large platforms and very large search engines, imposing on the latter a more stringent liability regime, particularly in terms of moderation. Rethinking the status of these players would make it possible to move towards a system of shared liability – for example in the event of the dissemination of illegal content. According to the CIANum, such a reform can only be considered at the European level. The revision of the DSA scheduled for 2027 could be the right time to introduce such a change.

The note also calls for consideration to be given to the multiple uses of digital services, in particular those relating to generative AI. In this respect, the CIANum will lead the "Generative AI and vulnerabilities" commission, whose initial conclusions will be made public by the end of May 2026. The final report will be submitted in September 2026.

Finally, in view of the wide range of digital tools and the many risks to which minors are exposed, the CIANum's report recommends strengthening and structuring digital, media and information literacy.


This article has been published in IRIS Legal Observations of the European Audiovisual Observatory.