It wasn't just fact-checking that Meta scrapped from its platforms as it prepares for the second Trump administration. The social media giant has also loosened its rules around hate speech and abuse, again following the lead of Elon Musk's X, specifically when it comes to sexual orientation and gender identity as well as immigration status.
The changes are worrying advocates for vulnerable groups, who say Meta's decision to scale back content moderation could lead to real-world harms. Meta CEO Mark Zuckerberg said Tuesday that the company will "remove restrictions on topics like immigration and gender that are out of touch with mainstream discourse," citing "recent elections" as a catalyst.
For instance, Meta has added the following to its rules, called community standards, that users are asked to abide by:
"We do allow allegations of mental illness or abnormality when based on gender or sexual orientation, given political and religious discourse about transgenderism and homosexuality and common non-serious usage of words like 'weird.'" In other words, it is now permitted to call gay people mentally ill on Facebook, Threads and Instagram. Other slurs and what Meta calls "harmful stereotypes historically linked to intimidation," such as Blackface and Holocaust denial, are still prohibited.
The Menlo Park, California-based company also removed a sentence from its "policy rationale" explaining why it bans certain hateful conduct. The now-deleted sentence said that hate speech "creates an environment of intimidation and exclusion, and in some cases may promote offline violence."
"The policy change is a tactic to earn favor with the incoming administration while also reducing business costs related to content moderation," said Ben Leiner, a lecturer at the University of Virginia's Darden School of Business who studies political and technology trends. "This decision will lead to real-world harm, not only in the United States where there has been an uptick in hate speech and disinformation on social media platforms, but also abroad where disinformation on Facebook has accelerated ethnic conflict in Myanmar."
Meta, in fact, acknowledged that it didn't do enough to prevent its platform from being used to "incite offline violence" in Myanmar, fueling communal hatred and violence against the country's Muslim Rohingya minority.
Arturo Béjar, a former engineering director at Meta known for his expertise on curbing online harassment, said that while most of the attention has gone to the company's fact-checking announcement Tuesday, he is more worried about the changes to Meta's harmful content policies.
That's because instead of proactively enforcing rules against things like self-harm, bullying and harassment, Meta will now rely on user reports before it takes any action. The company said it plans to focus its automated systems on "tackling illegal and high-severity violations, like terrorism, child sexual exploitation, drugs, fraud and scams."
Béjar said that's the case even though "Meta knows that by the time a report is submitted and reviewed the content will have done most of its harm."
"I shudder to think what these changes will mean for our youth, Meta is abdicating their responsibility to safety, and we won't know the impact of these changes because Meta refuses to be transparent about the harms teenagers experience, and they go to extraordinary lengths to dilute or stop legislation that could help," he said.