Meta has made significant changes to its content moderation policies in preparation for the second Trump administration. In addition to scrapping its fact-checking program, the social media giant has relaxed rules around hate speech and abuse, particularly concerning sexual orientation, gender identity, and immigration status. These changes have alarmed advocates for vulnerable groups, who fear that weaker content moderation could lead to real-world harm.
Meta's CEO, Mark Zuckerberg, stated that the company will no longer restrict topics like immigration and gender, citing recent elections as a driving force behind the decision. Notably, Meta has updated its community standards to permit allegations of mental illness or abnormality based on gender or sexual orientation, framing such statements as part of political and religious discourse.
While certain slurs and harmful stereotypes remain prohibited, Meta has removed a sentence from its policy rationale that linked hate speech to offline violence. Critics argue that these policy changes are aimed at aligning with the incoming administration and cutting costs related to content moderation, potentially leading to increased hate speech and disinformation.
Experts warn that the shift from proactive enforcement to reliance on user reports for harmful content could have detrimental effects. By focusing automated systems only on severe violations such as terrorism and child exploitation, Meta may not catch issues like self-harm, bullying, and harassment until after harm has been done.
Critics have also raised concerns about Meta's lack of transparency regarding the impact of these changes, particularly on teenagers. The company's reluctance to disclose the harms experienced by young users, along with its resistance to legislative measures aimed at protecting them, has deepened apprehension among online safety advocates.