OpenAI’s Decision on Election-Related Imagery
In a significant move, ChatGPT declined more than 250,000 requests to generate images of candidates in the US presidential election. OpenAI, the organization behind the AI chatbot, shared the figures in a blog update on Friday about DALL-E, its platform for image creation. The rejected requests involved prominent figures including president-elect Donald Trump, his vice-presidential pick JD Vance, current president Joe Biden, Democratic candidate Kamala Harris, and her running mate Tim Walz.
The refusals stem from the “safety measures” OpenAI put in place in the lead-up to election day. According to the blog post, these measures are especially important in an electoral context and form part of OpenAI’s broader effort to prevent its technology from being misused for deceptive or harmful purposes.
OpenAI’s teams said they “have not seen evidence” of any significant influence operations related to the US elections going viral via its platforms. The blog also recalled an incident from August in which the company thwarted an Iranian influence campaign dubbed Storm-2035, which attempted to generate articles about US political affairs while masquerading as both conservative and progressive news outlets. Accounts associated with Storm-2035 were subsequently banned from OpenAI’s platforms.
In an update released in October, OpenAI revealed that it had disrupted more than “20 operations and deceptive networks” from around the world that attempted to use its platforms for misleading purposes. None of the identified networks tied to US election-related activity managed to generate “viral engagement,” the report noted.