
AI Firms Join to Combat Child Exploitation

April 24, 2024

Leading artificial intelligence companies, including OpenAI, Microsoft, Google, and Meta, have pledged to prevent their AI technologies from being used to create or distribute child sexual abuse material (CSAM). The commitment is part of an initiative led by the child-safety group Thorn and the nonprofit All Tech Is Human, which focuses on responsible technology use. The initiative sets a new industry standard aimed at combating the exploitation of children as generative AI technologies continue to advance. According to Thorn, more than 104 million files of suspected child sexual abuse material were reported in the US in 2023 alone, highlighting the urgent need for preventative measures.

Thorn and All Tech Is Human recently released a paper titled “Safety by Design for Generative AI: Preventing Child Sexual Abuse,” which outlines various strategies and recommendations for AI developers. The paper urges companies to be meticulous in choosing the datasets used to train AI models, advising them to avoid datasets that contain not only CSAM but also adult sexual content. This precaution is necessary because of the propensity of generative AI to inadvertently combine concepts from different datasets, potentially leading to the creation of inappropriate material.

The paper also emphasizes the need for social media platforms and search engines to proactively remove links to websites and apps that allow the alteration of images to depict nudity in children. This proactive approach aims to prevent the creation and spread of new AI-generated CSAM online. A significant concern raised by Thorn is the “haystack problem,” where an influx of AI-generated CSAM makes it increasingly difficult for law enforcement agencies to identify genuine victims amidst a vast amount of content.

Rebecca Portnoff, Thorn’s vice president of data science, told the Wall Street Journal that the initiative aims to significantly mitigate the harms AI technology can cause in the context of child exploitation. She emphasized that the technology sector does not have to resign itself to the adverse effects of AI but can actively steer the course of AI development to safeguard vulnerable populations. Some companies have already begun implementing changes, such as segregating data involving children from datasets containing adult content and adding watermarks to AI-generated images, though these measures are not foolproof: watermarks can be removed.

Reference:
  • AI Giants Unite to Block AI Use in Child Exploitation
Tags: April 2024, child sexual abuse material, CSAM, Cyber News, Cyber News 2024, Cyber threats, Cybersecurity, Google, Meta, Microsoft, OpenAI
    © 2025 | CyberMaterial | All rights reserved
