The British media regulator, Ofcom, has urged online platforms to make their recommendation algorithms safer in order to protect children from harmful online content. The call forms part of draft proposals for implementing the Online Safety Act, which the Conservative government passed in 2023. The Act is designed to limit children’s exposure to potentially damaging content such as pornography and material related to self-harm. The proposed regulations focus primarily on online intermediaries, including popular platforms such as WhatsApp, Instagram, YouTube, Google, and Facebook.
Ofcom’s proposals would require platforms to configure their recommender systems to filter the most harmful content out of children’s feeds. The regulator stipulates that any service whose recommendation system poses a higher risk of exposing users to harmful content must proactively identify child users and adjust its algorithms accordingly. These measures are intended to reduce the visibility of harmful content and to give children a way to provide negative feedback on the content recommended to them.
Additionally, Ofcom plans to launch further consultations later this year on how automated tools, including artificial intelligence (AI), could be used to proactively detect illegal content. This is part of a broader effort to use technology to safeguard users against harmful online interactions. The consultation is expected to address both the technological and the ethical dimensions of using AI to monitor and moderate online content.
The proposed regulations have met with mixed reactions from the tech industry. Companies such as Apple and WhatsApp have voiced concerns, particularly over provisions that could require the deployment of “accredited technology” to identify content related to terrorism or child sexual exploitation and abuse. These provisions have raised questions about privacy and about the technical feasibility of such widespread content monitoring. Meanwhile, a UK parliamentary committee has underscored the urgency of the safety measures, criticizing Ofcom’s earlier lack of clarity about how it would handle data from the nearly 100,000 service providers within the regulation’s scope, a gap that could delay the rollout of these protections.