A tech watchdog has issued a warning about the potential impact of leading AI image generators on the upcoming presidential election. The Center for Countering Digital Hate, a group that monitors online platforms, conducted a study showing how easily AI tools can be prompted to generate fake election-related images.
Researchers from the Center for Countering Digital Hate tested four leading AI image generators - Midjourney, Stability AI's DreamStudio, OpenAI's ChatGPT Plus, and Microsoft's Image Creator - by entering prompts related to the presidential election. The study found that in 41% of the tests, these tools produced potentially misleading photorealistic images.
Some of the fake images created during the study showed President Biden lying sick in a hospital bed, boxes of ballots in a dumpster, and former President Donald Trump being arrested. These findings raise serious concerns about the spread of misinformation and its potential impact on the integrity of the electoral process.
Although the platforms have safeguards meant to prevent the creation of misleading AI-generated content, the study found that enforcing those policies remains a significant challenge. Many tech companies prohibit political misinformation on paper, but implementation and enforcement of those rules are often inadequate.
The researchers emphasized the urgent need for AI companies to strengthen their detection mechanisms so they can identify and block attempts to create deceptive images. They also stressed the importance of addressing so-called 'jailbreak' tactics that users employ to circumvent platform rules, such as describing a candidate's physical appearance instead of using their name.
The implications of AI-generated misinformation extend well beyond the 2024 US election, with the potential to affect elections around the world. Reports have already surfaced of internet users creating AI-generated images to manipulate voter perceptions, underscoring the urgency of the problem.