While the systemic brutality used by Colombian police to quell national protests in 2021 was real and is well documented, photos recently used by Amnesty International to highlight the issue were not.
The international human rights advocacy group has come under fire for posting images generated by artificial intelligence to promote its reports on social media – and has since removed them.
The images, including one of a woman being dragged away by police officers, depict scenes from the protests that swept across Colombia in 2021.
But anything more than a momentary glance at the images reveals that something is off.
The faces of the protesters and police officers are smoothed over and warped, giving the images a dystopian aura.
The tricolour carried by one protester has the right colours – yellow, blue and red – but in the wrong order, and the police uniform is outdated.
Amnesty and other observers have documented hundreds of cases of human rights abuses committed by Colombian police during the wave of unrest in 2021, among them violence, sexual harassment and torture.
Their research has raised awareness of the heavy-handedness of Colombian police and contributed to the growing acceptance of the need for reform.
But photojournalists and media scholars warned that the use of AI-generated images could undermine Amnesty’s own work and feed conspiracy theories.
“We are living in a highly polarised era full of fake news, which makes people question the credibility of the media. And as we know, artificial intelligence lies. What sort of credibility do you have when you start publishing images created by artificial intelligence?” said Juancho Torres, a photojournalist based in Bogotá.
At least 38 civilians were killed by state forces during 2021’s national strike, which was sparked by an unpopular tax reform and then fanned by the brutal police response.
In cases documented by Bogotá-based Temblores, women were abducted, taken to dark buildings, and raped by groups of policemen.
Amnesty International said it had used real photographs in previous reports but chose AI-generated images this time to protect protesters from possible state retribution.
To avoid misleading the public, the images included text stating that they were produced by AI.
“We have removed the images from social media posts, as we don’t want the criticism for the use of AI-generated images to distract from the core message in support of the victims and their calls for justice in Colombia,” Erika Guevara Rosas, director for Americas at Amnesty, said.
“But we do take the criticism seriously and want to continue the engagement to ensure we understand better the implications and our role to address the ethical dilemmas posed by the use of such technology.”
Gareth Sella was blinded in his left eye when a police officer in Bogotá shot him with a rubber bullet at the protests. He argued that hiding the identity of protesters makes sense to protect them from ending up in jail on inflated charges.
“As the UN has documented, the state has continued pursuing protesters and more than 100 are in jail, many facing disproportionate charges, such as terrorism, to make an example of them. Hiding our identities seems sensible to me, given that two years on we still live in fear that we could be jailed at any moment, or even that they could come after us on the streets,” Sella said.
AI image generators are trained on photographs previously taken by humans and synthesize new images from them, raising questions of plagiarism in photojournalism and about the industry’s future.
Torres said Amnesty’s use of AI images was an insult to the photojournalists who cover protests from the frontline.
“The power of a journalist is to recreate reality and what they see – something which, during the national strike, many reporters, photographers and cameramen risked their lives to do. I have a friend who lost an eye. Using AI images not only loses that reality, it loses the connection between journalists and people.”