Meta will begin labeling a wider range of video, audio and image content as "Made with AI" starting in May.
Why it matters: Meta admits its current labeling policies are "too narrow" and that a stronger system is needed to handle today's wider range of AI-generated and otherwise manipulated content, such as a January video that appeared to show President Biden inappropriately touching his granddaughter.
- The labels can be applied through self-disclosure when a user posts content, on the advice of fact-checkers, or when Meta detects invisible markers of AI-generated content.
Context: Meta's new policy is a response to feedback from its independent Oversight Board, which urged an update to the current policy.
- Present "manipulated media" rules apply only to "videos that are created or altered by AI to make a person appear to say something they didn't say."
- In February, the company began adding "Imagined with AI" labels to photorealistic images made with its Meta AI feature.
What they're saying: "We'll keep this content on our platforms so we can add labels and context," Monika Bickert, vice president of content policy, wrote in a blog post, arguing that additional transparency is better than censoring content.
- But Meta will "remove content, regardless of whether it is created by AI or a person, if it violates our policies against voter interference, bullying and harassment, violence and incitement, or any other policy," Bickert wrote.