An AI-generated image of an explosion at the Pentagon in the United States has been widely circulated on Twitter after being shared by verified accounts.
The Twitter account that shared the fake image, @BloombergFeed, was verified but unaffiliated with Bloomberg News. It has reportedly been suspended since the tweet went viral.
The account tweeted: “Large explosion near the Pentagon complex in Washington D.C. – initial report,” with the attached picture showing a large cloud of smoke.
Although the building in the picture did not resemble the Pentagon, the image was apparently convincing enough to deceive Twitter users at first glance.
The Pentagon is the common name for the headquarters of the United States Department of Defence. It is located in Arlington, Virginia, just across the Potomac River from Washington, D.C. The Pentagon is a large, five-sided building and serves as the command centre for the US military.
Prime example of the dangers in the pay-to-verify system: This account, which tweeted a (very likely AI-generated) photo of a (fake) story about an explosion at the Pentagon, looks at first glance like a legit Bloomberg news feed. pic.twitter.com/SThErCln0p
— Andy Campbell (@AndyBCampbell) May 22, 2023
Russian state-media Twitter account RT, which has three million followers, and Twitter account OSINTdefender, which has more than 300,000 followers, were among the verified accounts that shared the image.
Arlington Fire and Emergency Medical Services later confirmed that there hadn’t been an explosion near the Pentagon.
The department tweeted that it was “aware of a social media report circulating online about an explosion near the Pentagon”.
It added: “There is NO explosion or incident taking place at or near the Pentagon reservation, and there is no immediate danger or hazards to the public.”
The viral tweet has highlighted concerns about both AI-generated images and Twitter’s new Blue subscription service, and the consequences when the two problems combine.
The fake tweet had a real-life impact, with CNN reporting that it had resulted in a dip in the stock market.
A number of misleading AI-generated images have circulated online in recent months.
In March, fake AI-generated images of former US president Donald Trump being arrested were shared on social media.
Donald Trump has been arrested downtown Washington DC pic.twitter.com/81su1uCsBB
— erén✰ 🎯 (@erenfromtarget) March 21, 2023
Twitter’s additional context box that appears under misleading tweets clarified: “Images of Donald Trump’s arrest circulating on Twitter are fake. They are AI-generated and have no factual basis. For more information, see Ars Technica and Forbes.”
Earlier this month, Amnesty backtracked after sharing photos from a protest in Colombia that had been edited using AI.
The human rights watchdog said it had artificially edited the images to protect the identities of activists who sometimes face retaliation from authoritarian regimes.
Amid the rise of AI, Google has announced two features to help people establish which pictures are real and which are artificially generated.
Google will soon show users when it first indexed an image and similar pictures, where a picture first appeared online, and where else it has been published.
In April, Twitter began removing legacy verified checkmarks from hundreds of thousands of accounts.
The only way to remain or become verified was to sign up for Elon Musk’s Twitter Blue subscription service.
Soon after the subscription service was launched, accounts impersonating celebrities and companies, boasting the blue tick that implies an account is legitimate, began appearing.
Accounts with blue check marks impersonating former UK prime minister Tony Blair and former US president George Bush went viral for their parody tweets. The issue also sparked concerns that scammers could take advantage of the blue checks to pose as customer service accounts.
The virality of the fake Pentagon image is the latest instance to raise concerns about the current blue check system.