Leading artificial intelligence companies, including OpenAI, Microsoft, Google, and Meta, have pledged to prevent their AI technologies from being used to create or distribute child sexual abuse material (CSAM). The commitment comes as part of an initiative led by the child-safety group Thorn and All Tech Is Human, a nonprofit focused on responsible technology. The initiative sets a new industry standard for combating the exploitation of children as generative AI technologies continue to advance. According to Thorn, more than 104 million files of suspected CSAM were reported in the US in 2023 alone, underscoring the urgent need for preventative measures.
Thorn and All Tech Is Human recently released a paper titled “Safety by Design for Generative AI: Preventing Child Sexual Abuse,” which outlines strategies and recommendations for AI developers. The paper urges companies to be meticulous in selecting the datasets used to train AI models, advising them to exclude not only datasets containing CSAM but also those containing adult sexual content. The precaution matters because generative models readily combine concepts learned from separate parts of their training data, so a model trained on both kinds of material could combine them to produce abusive imagery.
The paper also calls on social media platforms and search engines to proactively remove links to websites and apps that let users alter images of children to depict nudity, with the aim of preventing new AI-generated CSAM from being created and spread online. A significant concern raised by Thorn is the “haystack problem”: an influx of AI-generated CSAM makes it increasingly difficult for law enforcement agencies to identify genuine victims amid the flood of synthetic content.
Rebecca Portnoff, Thorn’s vice president of data science, told the Wall Street Journal that the initiative’s goal is to significantly mitigate the harms AI technology can cause in the context of child exploitation. She emphasized that the technology sector does not have to resign itself to AI’s adverse effects; it can actively steer the course of AI development to safeguard vulnerable populations. Some companies have already begun making changes, such as separating data involving children from datasets containing adult content and adding watermarks to AI-generated images, although watermarking is not foolproof, since watermarks can be removed.
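To make that fragility concrete, here is a minimal sketch of one naive watermarking technique, least-significant-bit (LSB) steganography, written in Python with Pillow. This is an illustration of the general idea only, not the method any of the companies above actually uses (production provenance schemes are typically more robust); the `MARK` payload and file names are hypothetical.

```python
# Toy sketch: least-significant-bit (LSB) watermarking with Pillow.
# Illustrates why naive image watermarks are easy to destroy.
from PIL import Image

MARK = b"AI-GENERATED"  # hypothetical payload


def embed(img: Image.Image, payload: bytes) -> Image.Image:
    """Hide payload bits in the red channel's least significant bits."""
    bits = "".join(f"{byte:08b}" for byte in payload)
    out = img.convert("RGB")
    px = out.load()
    width = out.size[0]
    for i, bit in enumerate(bits):
        x, y = i % width, i // width
        r, g, b = px[x, y]
        px[x, y] = ((r & ~1) | int(bit), g, b)  # overwrite the LSB
    return out


def extract(img: Image.Image, n_bytes: int) -> bytes:
    """Read n_bytes back out of the red channel's LSBs."""
    px = img.convert("RGB").load()
    width = img.size[0]
    bits = "".join(
        str(px[i % width, i // width][0] & 1) for i in range(n_bytes * 8)
    )
    return bytes(int(bits[j:j + 8], 2) for j in range(0, len(bits), 8))


source = Image.new("RGB", (64, 64), "white")
tagged = embed(source, MARK)
print(extract(tagged, len(MARK)))  # b'AI-GENERATED': mark survives a lossless copy

# A single lossy re-encode is enough to wipe the mark:
tagged.save("tagged.jpg", quality=85)
print(extract(Image.open("tagged.jpg"), len(MARK)))  # garbage bytes, mark destroyed
```

The final extraction returns garbage because lossy re-encoding perturbs exactly the pixel bits that carry the mark, which is the sense in which such watermarks “can be removed”: one save in another format, a crop, or a resize is often enough.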