Meta has introduced a new policy for content generated with AI. Under the policy, creators must disclose when the audio, video, or image content they post was made with generative AI. In addition, Meta will label images when it detects “industry standard AI image indicators” in content uploaded to its platforms.
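Those indicators generally live in the file’s metadata rather than in the pixels. As a minimal sketch of the idea, and not Meta’s actual detection pipeline, the snippet below scans an image file for the IPTC digital-source-type term trainedAlgorithmicMedia, which a number of generative-AI tools embed in XMP metadata; the naive byte scan and the example file name are assumptions made for illustration.

```python
from pathlib import Path

# IPTC NewsCodes term that several generative-AI tools write into
# XMP metadata to mark synthetic images. A real pipeline would parse
# the XMP packet properly; this raw substring scan is only a sketch.
IPTC_AI_MARKER = b"trainedAlgorithmicMedia"

def looks_ai_generated(image_path: str) -> bool:
    """Heuristic: True if the file carries the IPTC AI-generated marker."""
    return IPTC_AI_MARKER in Path(image_path).read_bytes()

if __name__ == "__main__":
    print(looks_ai_generated("upload.jpg"))  # hypothetical uploaded file
```

Metadata like this is trivially strippable, which is part of why platforms pair it with invisible watermarks and self-disclosure requirements rather than relying on it alone.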
The decision follows a recommendation from Meta’s Oversight Board, which argued that the manipulated-media policy had become outdated. Previously, the policy covered only content manipulated to make people appear to say things they had not said. With generative AI now able to fabricate actions as convincingly as speech, the Board held that depictions of people doing things they never did deserve the same scrutiny.
Starting in July, Meta will stop removing deepfake videos unless they violate other community standards. The Oversight Board argued that this approach balances the need to prevent deception against the need to safeguard freedom of expression: leaving AI-generated content up with a label gives users the context to judge its authenticity for themselves.
Other major platforms have taken similar steps against manipulated content. Google introduced an “About this image” feature, and YouTube offers a self-labeling mechanism for creators to declare AI-generated material. On the standards side, the Coalition for Content Provenance and Authenticity (C2PA) has published an open specification for attaching tamper-evident provenance metadata to visual and audio content, as sketched below.
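In practice, a C2PA manifest is embedded in the media file itself; in JPEGs it travels as a JUMBF box inside APP11 segments. The sketch below only checks for that signature; it does not verify the manifest’s cryptographic claims, and the file name is hypothetical.

```python
from pathlib import Path

def has_c2pa_manifest(jpeg_path: str) -> bool:
    """Heuristic: does this JPEG carry a C2PA (JUMBF) manifest?

    Per the C2PA convention, manifests ride in JPEG APP11 (0xFFEB)
    segments; this walks the segment headers and looks for the
    'c2pa' label, without validating the manifest itself.
    """
    data = Path(jpeg_path).read_bytes()
    if not data.startswith(b"\xff\xd8"):          # SOI: not a JPEG at all
        return False
    i = 2
    while i + 4 <= len(data) and data[i] == 0xFF:
        marker = data[i + 1]
        if marker == 0xDA:                        # SOS: header segments end here
            break
        length = int.from_bytes(data[i + 2:i + 4], "big")
        if marker == 0xEB and b"c2pa" in data[i + 4:i + 2 + length]:
            return True                           # APP11 segment with C2PA JUMBF
        i += 2 + length                           # advance to next segment header
    return False

if __name__ == "__main__":
    print(has_c2pa_manifest("photo.jpg"))         # hypothetical file
```

Full verification goes much further: the manifest chains cryptographically signed claims about how an asset was created and edited, which the C2PA project’s open-source tooling can validate end to end.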