Content warning: This post contains examples of targeted hate speech and harassment, and references to violence and self-harm.
On January 7, 2025, Meta announced sweeping rollbacks to its content moderation policies across Facebook, Instagram, and Threads — ending third-party fact-checking in the U.S. and weakening its hate-speech policies worldwide. Additionally, the company announced that it would halt “proactive” enforcement of some policies on harmful content, notably hate speech. As noted in GLAAD’s 2025 Social Media Safety Index, the rollbacks include new exceptions expressly allowing anti-LGBTQ hate speech, such as stating that LGBTQ people are “abnormal” and “mentally ill,” as well as Meta’s own use of anti-LGBTQ language (referring to LGBTQ people using the terms “homosexuality” and “transgenderism” in its updated hate speech policy).
In the absence of data from Meta itself, GLAAD partnered with UltraViolet and All Out to survey more than 7,000 active users from 86 countries — focusing on people who Meta defines as belonging to protected characteristic groups — to understand how these policy shifts have affected their experience online. The findings are both stark and deeply concerning: since the rollbacks, users report a sharp rise in hateful content, increased self-censorship, and a pervasive sense of vulnerability. The survey is part of a larger campaign, called Make Meta Safe.
Survey Methodology
We conducted a mixed-methods survey in English, Portuguese, Spanish, German, Italian, and French, reaching individuals targeted by hate on the basis of protected characteristics (i.e., race, gender, sexual orientation, gender identity, disability, religion, national origin, and serious disease). Recruitment was done organically — via email and social-media outreach through co-sponsoring organizations — to ensure that our sample reflected the communities most at risk. After the data was cleaned for duplicates, respondents’ quantitative ratings and qualitative testimony about their experiences since January 2025 were analyzed.
Key Findings at a Glance
- 1 in 6 respondents report being the victim of some type of gender-based or sexual violence on Meta platforms.
- 92% are concerned about harmful content increasing since the rollbacks.
- 72% see more hate targeting protected groups.
- 92% feel less protected from being exposed to or targeted by harmful content.
- Over 25% say they have been targeted directly with hate or harassment.
- 66% have witnessed harmful content in their feeds.
- 77% feel less safe expressing themselves freely on Meta platforms.
Hate on the Rise
When asked if harmful content had increased, 75% of LGBTQ respondents, 76% of women, and 78% of people of color said “yes.” One user, who is trans and nonbinary, reported, “Violence against me has skyrocketed since January. I live in daily fear.” Another shared:
“I rarely see friends’ posts now — my feed is filled with obscene manipulated images, commercial ads, and transphobic, sexist, violent comments, even under kitten videos. Death threats are not removed, even when reported.”
Erosion of Safety and Free Expression
Ninety-two percent of all respondents say they feel less protected from being exposed to or targeted by harmful content. Among LGBTQ respondents, and transgender people in particular, stories of targeted harassment are common:
“I recently saw someone state they wished all transgender people would die by suicide — 41% to become 100%. When I told them how awful that was, they called me a transphobic slur.”
“One night I reported at least 10 comments directly inciting violence towards the LGBT community. Facebook responded within less than a minute saying that the comments were investigated and they didn’t see anything wrong, and [they] kept the comments up.”
“I recently posted information about my transition and someone responded with a picture of a noose.”
A full 77% say they now feel less safe expressing themselves following the policy changes. One respondent noted:
“There are times when I am afraid to comment on a post because of the violence expressed by others in their [comments].”
Gender-Based and Sexual Violence Online
Alarmingly, 27% of LGBTQ respondents and 35% of people of color report that they have been the direct targets of gender-based or sexual violence online. Examples include doxxing, stalking, threats of physical harm, and rape threats. As one survivor of digital stalking put it:
“Weaponizing technology to threaten, harass, and silence me has transformed my online existence into a battleground of fear.”
Global Impact, Local Harms
While the majority of respondents are from the U.S., U.K., and Canada, voices from the Global South underscore that unchecked hate online can — and does — translate to violence offline. In Colombia, users spoke of renewed attacks on trans people in the wake of the murder of Sara Millerey González, whose brutal killing was filmed and circulated on social media. Where LGBTQ lives are already marginalized or criminalized, these policy rollbacks put communities in even greater danger.
Why This Matters
Meta produces quarterly reports on the prevalence of harmful content and content labeled as false by fact-checkers. In its most recent report, published last month, the company stated that, from January to April 2025, “violating content largely remained unchanged for most problem areas.”
But it’s important to note: those numbers are based solely on internal data and remain opaque to outside scrutiny. Our survey centers the lived experiences of users themselves, revealing that weakened policies have not led to “more speech and fewer mistakes,” as Meta claims, but rather to a more hostile environment for those already most vulnerable.
A Call to Action
Everyone deserves online spaces where they can connect, communicate, and organize without fear of harassment, threats, or dehumanization. GLAAD, UltraViolet, and All Out therefore urge Meta to:
- Commission an independent third-party review of the impact of the January 2025 policy changes, centering user experiences.
- Reinstate robust hate-speech protections for all historically marginalized groups, including LGBTQ people.
- Restore third-party fact-checking and proactive enforcement mechanisms globally.
- Engage civil-society stakeholders in future policy deliberations, ensuring that human-rights perspectives shape content-moderation standards.
The data is clear: since Meta’s draconian rollbacks, harmful content has surged, user safety has plummeted, and freedom of expression for marginalized communities hangs in the balance. We call on Mark Zuckerberg and Meta’s leadership to reverse course — restoring the guardrails that make social media a place for community and expression, not only for the broad range of marginalized communities who are so directly experiencing these harms, but for everyone.
Read the full survey report and detailed findings at makemetasafe.org.