AI censorship? ChatGPT rejects 250,000 requests for candidate images
OpenAI’s ChatGPT rejects 250,000 requests for 2024 U.S. presidential candidate images
In a recent blog post, OpenAI revealed that ChatGPT had turned down more than a quarter of a million requests to generate images of the 2024 U.S. presidential candidates. The rejected requests, which would have used the chatbot's built-in image-generation capabilities, involved prominent figures including President-elect Donald Trump, Vice President Kamala Harris, and President Joe Biden.
The surge in generative artificial intelligence technology has raised concerns about misinformation spread through deepfakes, especially ahead of the many elections held worldwide in 2024. Clarity, a machine learning firm, reported a 900% year-over-year increase in deepfake content, some of which was attributed by U.S. intelligence officials to Russian efforts to disrupt U.S. elections.
OpenAI detailed its stance against misinformation in a 54-page report published in October, disclosing that it had thwarted more than 20 deceptive operations globally that sought to exploit its models to create misleading content. According to the report, none of the election-related operations managed to gain significant traction or "viral engagement."
Lawmakers have taken note of the potential ramifications of misleading AI-generated content, with concerns about accuracy and reliability remaining prevalent. Alexandra Reeve Givens, CEO of the Center for Democracy & Technology, cautioned against relying on AI chatbots for election-related information, citing the inherent risks of the technology.
As large language models like ChatGPT continue to evolve, vigilance against misinformation and deepfakes remains crucial. OpenAI's efforts to disrupt deceptive operations and promote transparency underscore the need for comprehensive measures against misinformation in the digital landscape, and the impact of AI-generated content on public discourse and elections is likely to shape future regulatory frameworks.

Ultimately, addressing the risks posed by deepfakes and deceptive practices will require collaborative effort from technology companies, regulators, and society at large to safeguard the integrity of information and democratic processes.