PC Gamer
James Bentley

Meta-funded regulator for AI disinformation on Meta's platform comes under fire: 'You are not any sort of check and balance, you are merely a bit of PR spin'

(Image credit: Cheng Xin/Getty Images)

Just a few years ago it was easy to spot, at first glance, that an AI image wasn't real: edges of objects blended together, proportions didn't quite feel right, people had too many fingers, and it never got cats right. Now it's reaching the point where it can be genuinely hard to tell. In the run-up to the US elections, TechCrunch hosted a panel of AI experts on AI disinformation (misinformation spread with deliberate intent to deceive), and Meta's self-regulation policies found themselves in the firing line.

This conversation around disinformation ended up on Meta's practices because Pamela San Martín, Co-chair of the Oversight Board for Meta, was one of the key speakers.

The Oversight Board, according to its own FAQ, "is a body of experts from around the world that exercises independent judgment and makes binding decisions on what content should be allowed on Facebook and Instagram".

However, just a few questions down the page, the FAQ declares that the board is funded directly by Meta, to the tune of $280 million over the last five years alone. That declaration of independence, set against the funding arrangement, points to a tension the other panellists picked up on.

San Martín, whilst acknowledging the problems of AI and Meta's own need to learn from it, praised AI as a tool for battling AI misinformation.

"Most social media content is moderated by automation and automation uses AI, either to flag certain content to be reviewed by humans, or to flag certain content to be actioned."

Off the back of this, she also suggested that the best way to combat disinformation isn't always to remove it, but sometimes to inform or label it correctly. Think of the X community notes function and you have a good idea of what that looks like. She also noted that public reports of disinformation are mostly a good tool for public figures and information, and do little to dissuade harm to private individuals.

San Martín saw pushback when the conversation turned to regulation, specifically the self-regulation of oversight boards.

"Regulation is necessary. I'm very concerned when it's speech-related, but I'm completely for regulation when it has to do with transparency and accountability," San Martín told the group.

Brandie Nonnecke, the founding director of the CITRIS Policy Lab, responded to this claim with "I don't think these transparency reports really do anything".

The argument here is that, given the sheer volume of AI disinformation out there, a report can show thousands of actioned examples without giving any broader picture of what disinformation is left untouched. Such reports can give "a false sense that they are actually doing due diligence". And when they are produced internally, it can also be hard to judge their intent.

Imran Ahmed, CEO of the Center for Countering Digital Hate (CCDH), was also critical not only of Meta but of Meta's Oversight Board and its incentives.

"Self-regulation is not regulation, because the oversight board itself cannot answer the five fundamental questions you should always ask someone who has power. What power do you have, who gave you that power, in whose interests do you wield that power, to whom are you accountable, and how do we get rid of you if you're not doing a good job? If the answer to every single one of those questions is (Meta) you are not any sort of check and balance, you are merely a bit of PR spin."

This is an important point when talking about regulation, and one San Martín rebuffed by noting that she cannot be fired by Meta for her reports.


As noted by session moderator Kyle Wiggers, the Meta Oversight Board saw layoffs in April this year, though San Martín assured the panel that the funding for her work is overseen by a trust to which Meta has irrevocably committed money.

The trust can, however, choose not to extend board members' terms, so while they can't be fired, their funding can stop, and this touches on some of the wariness around transparency reports and self-regulation.

Meta's approach to AI has met wide distrust, as shown by the "Goodbye Meta AI" chain mail, and self-regulation may not be the best way to tackle the misuse of AI.

Nonnecke suggests that transparency reports can, ironically, obfuscate the very problems they intend to tackle. Questioning Meta's incentives to regulate itself feels like a necessary step toward a more intelligent and safe approach to AI on its platforms.
