Since May 2023, the proliferation of websites hosting fabricated and misleading content generated by artificial intelligence (AI) has surged by more than 1,000%, according to a report by NewsGuard. The organization identified 603 AI-generated news and information sites operating with minimal human oversight, indicating a concerning rise in misinformation facilitated by generative AI tools.
These sites often carry generic names that mimic legitimate news outlets, making it difficult for consumers to distinguish authentic journalism from AI-generated content. The report emphasizes that the growth of AI-generated content poses a significant threat to accurate information dissemination, particularly in the lead-up to events like the 2024 US presidential election. Because generative AI lowers the barrier for actors ranging from intelligence agencies to unsophisticated individuals to spin up outlets for propaganda or false narratives, the potential impact on public perception is a growing concern.
NewsGuard highlights that these websites are typically funded through programmatic advertising, which places ads automatically without regard to the nature or quality of the host site, meaning brands often support them unintentionally. This creates an economic incentive for the widespread creation of AI-generated content farms. NewsGuard urges brands to exclude untrustworthy sites from their advertising strategies to avoid inadvertently funding misinformation.
The surge in AI-generated misinformation poses challenges for news consumers, many of whom lack the tools to differentiate authentic from fabricated content. The report advises readers to watch for telltale signs of AI-generated material, such as stray error messages or phrasing characteristic of chatbot responses left in published articles. While AI tools may become more common in legitimate newsrooms, NewsGuard emphasizes that effective human oversight is essential to maintain journalistic standards and to avoid the mass-produced output that characterizes AI-generated misinformation sites.
In summary, the rapid rise of AI-generated websites disseminating misinformation underscores the challenges posed by generative AI tools and the potential impact on public discourse and perception. As these tools become more accessible, the need for effective measures to distinguish between trustworthy and untrustworthy sources becomes increasingly crucial.