Meta recently addressed a major issue with its Instagram platform after users were exposed to violent and graphic content on their Reels feeds. The company confirmed that an error in its system caused some users to be recommended content that should not have appeared. The disturbing content ranged from street fights and school shootings to body horror and murder. Meta issued an apology, acknowledging the mistake and assuring users that it had taken immediate action to fix the problem.
The issue sparked strong reactions from users on social media, with many sharing their disturbing experiences. One Reddit user described a sudden shift in their Instagram feed, where light-hearted content such as cats and miniature paintings was replaced by horrific videos. Another user reported being exposed to violent content, including a live execution, and expressed anger at Meta for failing to prevent such exposure.
Some even vowed to leave the platform for good, citing a traumatic experience of seeing explicit content upon opening the app.
While Meta allows some level of graphic content, its guidelines are clear about what is prohibited, including videos depicting extreme violence, dismemberment, and the suffering of humans and animals. The flood of disturbing content on Instagram Reels appears to have violated these standards, leading to widespread criticism of the platform’s failure to safeguard its users.
Many pointed to Meta's reliance on artificial intelligence and to recent changes at the company, including its massive layoffs, as potential factors behind the issue.
This incident comes amid significant changes at Meta, including the elimination of fact-checkers, removal of diversity and inclusion initiatives, and a relaxation of restrictions on political content. Meta has also reduced its workforce by over 21,000 employees in recent years, with many cuts affecting the teams responsible for maintaining trust and safety. As Meta faces mounting scrutiny over its handling of graphic content, the company will need to reassess its policies and actions to restore user trust.