Leading global human rights organization Amnesty International is defending its choice to use an AI image generator to depict protests and police brutality in Colombia. Amnesty told Gizmodo it used an AI generator to depict human rights abuses so as to preserve the anonymity of vulnerable protestors. Experts fear, however, that the use of the tech could undermine the credibility of advocacy groups already besieged by authoritarian governments that cast doubt on the authenticity of real footage.
Amnesty International’s Norway regional account posted three images in a tweet thread over the weekend acknowledging the two-year anniversary of a major protest in Colombia where police brutalized protestors and committed “grave human rights violations,” the organization wrote. One image depicts a crowd of armor-clad police officers, another features an officer with a red splotch over his face, and a third shows a protestor being violently hauled away by police. The images, each of which features its own telltale artifacts of AI generation, also carry a small note in the bottom left corner: “Illustrations produced by artificial intelligence.”
Commenters reacted negatively to the images, with many expressing unease over Amnesty’s use of a technology most often associated with oddball art and memes to depict human rights abuses. Amnesty pushed back, telling Gizmodo it opted to use AI in order to depict the events “without endangering anyone who was present.” Amnesty claims it consulted with partner organizations in Colombia and ultimately decided to use the tech as a privacy-preserving alternative to showing real protestors’ faces.
“Many people who participated in the National Strike covered their faces because they were afraid of being subjected to repression and stigmatization by state security forces,” an Amnesty spokesperson said in an email. “Those who did show their faces are still at risk and some are being criminalized by the Colombian authorities.”
Amnesty went on to say the AI-generated images were a necessary substitute for illustrating the event, since many of the cited rights abuses allegedly occurred under the cover of darkness after Colombian security forces cut off electricity access. The spokesperson said the organization added the disclaimer at the bottom of each image, noting it was created using AI, in an attempt to avoid misleading anyone.
“We believe that if Amnesty International had used the real faces of those who took part in the protests it would have put them at risk of reprisal,” the spokesperson added.
Critics say rights abusers could use AI images to discredit authentic claims
Human rights experts speaking with Gizmodo fired back at Amnesty, claiming the use of generative AI could set a troubling precedent and further undermine the credibility of human rights advocates. Sam Gregory, who leads WITNESS, a global human rights network focused on video use, said the Amnesty AI images did more harm than good.
“We’ve spent the last five years talking to 100s of activists and journalists and others globally who already face delegitimization of their images and videos under claims that they are faked,” Gregory told Gizmodo. Increasingly, Gregory said, authoritarian leaders try to bury a piece of audio or video footage depicting a human rights violation by immediately claiming it’s deepfaked.
“This puts all the pressure on the journalists and human rights defenders to ‘prove real’,” Gregory said. “This can occur preemptively too, with governments priming it so that if a piece of compromising footage comes out, they can claim they said there was going to be ‘fake footage.’”
Gregory acknowledged the importance of anonymizing individuals depicted in human rights media but said there are many other ways to effectively present abuses without resorting to AI image generators or “tapping into media hype cycles.” Media scholar and author Roland Meyer agreed and said Amnesty’s use of AI could actually “devalue” the work done by reporters and photographers who have documented abuses in Colombia.
A potentially dangerous precedent
Amnesty told Gizmodo it doesn’t currently have any policies for or against using AI-generated images though a spokesperson said the organization’s leaders are aware of the possibility of misuse and try to use the tech sparingly.
“We currently only use it when it is in the interest of protecting human rights defenders,” the spokesperson said. “Amnesty International is aware of the risk to misinformation if this tool is used in the wrong way.”
Gregory said any rule or policy Amnesty does implement regarding the use of AI could prove significant because it could quickly set a precedent others will follow.
“It’s important to think about the role of big global human rights organizations in terms of setting standards and using tools in this way that doesn’t have collateral harms to smaller, local groups who face much more extreme pressures and are targeted repeatedly by their governments to discredit them,” Gregory said.