Meta, the parent company of major social networks Facebook and Instagram, has revealed its strategy to combat disinformation, with a particular focus on AI-generated content, ahead of the upcoming EU Parliament elections. With large language models (LLMs) increasingly being used to spread misinformation, Meta acknowledges the need for robust countermeasures. The World Economic Forum has identified misinformation and disinformation as top global risks, especially with more than 50 countries holding national elections in 2024.
To address this, Meta is establishing an Elections Operations Center, staffed by experts who will identify and mitigate potential threats in real time. The company plans to expand its network of fact-checking organizations across the EU, covering content in more than 26 languages. Notably, Meta aims to tackle AI-generated content designed to deceive by identifying, labeling, removing, or down-ranking it. This includes labeling AI-generated images and building tools to assess content from other sources.
Meta’s Head of EU Affairs, Marco Pancini, highlighted the company’s commitment to removing serious misinformation that could lead to imminent harm or suppress voting. Meta also labels debunked content and reduces its distribution, a proactive approach to curbing deceptive material. Furthermore, the company will review AI-generated content with the help of fact-checkers and apply appropriate measures, emphasizing collaboration with other industry players on common standards and guidelines.