OpenAI, the organization behind ChatGPT, is taking proactive measures against disinformation ahead of this year's major elections, which span countries home to half the world's population and accounting for 60% of global GDP. The World Economic Forum's "Global Risks Report 2024" warned that generative artificial intelligence tools could be highly disruptive in spreading false information. To combat disinformation, OpenAI is introducing new tools and, notably, has banned the use of its technology, including ChatGPT and the image generator DALL-E 3, for political campaigns. The company says its focus is on safeguarding the election process and preventing the misuse of its technology to undermine democratic processes.
OpenAI acknowledges the risk of its tools being used for personalized persuasion and, until more is understood, prohibits building applications for political campaigning and lobbying. In preparation for the elections, OpenAI has assembled expertise from across its safety systems, threat intelligence, legal, engineering, and policy teams. The organization anticipates challenges such as misleading deepfakes, chatbots impersonating candidates, and influence operations at scale. To address these concerns, OpenAI is developing tools that attribute text generated by ChatGPT to reliable information sources and that let users detect whether an image was created with DALL-E 3.
The company is also committed to blocking applications that hinder democratic participation by spreading disinformation about voting eligibility. ChatGPT, for instance, directs users to authoritative websites when asked about voting locations, contributing to the fight against voter suppression. These actions reflect OpenAI's stated commitment to collaboration and vigilance in protecting the integrity of elections, part of its broader goal of ensuring responsible and ethical use of its AI technologies.