Today, GLAAD announced the findings of its fifth annual Social Media Safety Index (SMSI), a report on LGBTQ safety, privacy, and expression online.
The in-depth report analyzes six major social media platforms — TikTok, YouTube, X, and Meta’s Facebook, Instagram, and Threads — across 14 indicators that address a range of issues affecting LGBTQ people online, including data privacy, content moderation, workforce diversity, and more.
Read the full report.
Offering a quantitative ranking and highlighting LGBTQ safety policy rollbacks from major platforms, the report is a wake-up call for tech leaders and employees at major platforms, and for all of Silicon Valley.
As the report shows, rollbacks from some companies eerily mirror recent changes to federal websites and communications implemented by the Trump administration in line with Project 2025 (notably, Project 2025 calls for “deleting the terms sexual orientation and gender identity”).
In January, YouTube quietly removed “gender identity” from the list of protected characteristics in its hate speech policy, and Meta removed major components of its hate speech policy protections for LGBTQ people, adding language expressly stating that it is now allowed to refer to LGBTQ people as “abnormal” and “mentally ill.” In its policy revision, Meta also uses the terms “homosexuality” and “transgenderism,” the latter a well-known right-wing anti-trans trope, in reference to LGBTQ people.
Although YouTube claims that “our hate speech policies haven’t changed,” it is an objective fact that, sometime between January 29th and February 6th, the company removed “gender identity and expression” from its list of protected characteristics (the change is visible in the current and archived policy pages). The SMSI report calls out this unprecedented break from best practices in the field of trust and safety, stating: “YouTube should reverse this dangerous policy change and update its Hate Speech policy to expressly include gender identity and expression as a protected characteristic.”
These shifts — among others — undermine the safety of LGBTQ people and other historically marginalized groups, who are uniquely vulnerable to hate, harassment, and discrimination online and off.
All companies are failing to meet basic standards across most safety metrics on the SMSI scorecard. Out of a possible 100, the platforms received the following scores:
“At a time when real-world violence and harassment against LGBTQ people is on the rise, social media companies are profiting from the flames of anti-LGBTQ hate instead of ensuring the basic safety of LGBTQ users,” said GLAAD President and CEO Sarah Kate Ellis. “These low scores should terrify anyone who cares about creating safer, more inclusive online spaces.”
The quantitative methodology of GLAAD’s Platform Scorecard was created in partnership with Ranking Digital Rights (RDR) and research consultant Andrea Hackl. This year, GLAAD introduced a new scoring methodology that generates numeric ratings for each platform with regard to LGBTQ safety, privacy, and expression. The Platform Scorecard focuses on the existence of policies, and does not measure the enforcement of those policies. The 2025 scores are not directly comparable to the 2024 scores due to an extensive revision of the Scorecard methodology.

The Social Media Safety Index includes specific findings and recommendations for each company, and calls on companies to urgently and tangibly prioritize LGBTQ safety. The report also highlights the volume of online anti-trans hate, harassment, and disinformation, which has skyrocketed in the past year, a trend GLAAD examines qualitatively in the report.
Alongside these rollbacks in LGBTQ protections online, GLAAD’s Anti-LGBTQ Extremism Reporting Tracker has shown a distinct upward trend in offline anti-LGBTQ incidents in recent years. These include both criminal and non-criminal instances of harassment, vandalism, and assault motivated by anti-LGBTQ hate.
Social media platforms are vitally important for LGBTQ people, as spaces where we connect, learn, and find community. Although today’s social media landscape does indeed look dire, it is heartening that some platforms have implemented positive initiatives in the past year. For Pride Month in 2024, TikTok created its LGBTQIA+ TikTok Visionary Voices List and YouTube offered the Celebrate Pride on YouTube LGBTQ creators spotlight. As LGBTQ people face unprecedented attacks on our civil and human rights, now is the time for all companies to stand up for inclusive values and provide LGBTQ communities with the safety protections we need and the celebratory and affirming messages we deserve.
Key Findings of the 2025 SMSI include:
- Recent hate speech policy rollbacks from Meta and YouTube present grave threats to safety and are harmful to LGBTQ people on these platforms.
- Platforms are largely failing to mitigate harmful anti-LGBTQ hate and disinformation that violates their own policies.
- Platforms disproportionately suppress LGBTQ content, via removal, demonetization, and forms of shadowbanning.
- Anti-LGBTQ rhetoric and disinformation on social media have been shown to lead to offline harms.
- Social media companies continue to withhold meaningful transparency about content moderation, algorithms, data protection, and data privacy practices.
GLAAD’s Key Recommendations:
- Strengthen and enforce (or restore) existing policies and mitigations that protect LGBTQ people and others from hate, harassment, and misinformation, while also reducing suppression of legitimate LGBTQ expression.
- Improve moderation by providing mandatory training for all content moderators (including those employed by contractors) focused on LGBTQ safety, privacy, and expression; and moderate across all languages, cultural contexts, and regions. AI systems should be used to flag for human review, not for automated removals.
- Work with independent researchers to provide meaningful transparency about content moderation, community guidelines, development and use of AI and algorithms, and enforcement reports.
- Respect data privacy. Platforms should reduce the amount of data they collect, infer, and retain, and cease the practice of targeted surveillance advertising (including the use of algorithmic content recommender systems) and other incursions on user privacy.
- Promote and incentivize civil discourse including working with creators and proactively messaging expectations for user behavior, such as respecting platform hate and harassment policies.

GLAAD’s SMSI Platform Scorecard draws on RDR’s standard methodology to produce numeric ratings for each platform with regard to LGBTQ safety. This year, GLAAD added elements addressing emerging threats to LGBTQ people online as well as an indicator regarding content that promotes so-called “conversion therapy,” a practice that has been banned in 23 states and condemned by all major medical, psychiatric, and psychological organizations.
GLAAD and other monitoring organizations repeatedly encounter failures in enforcement of a company’s own guidelines for content moderation, including hate speech and harassment policies.
Specific LGBTQ safety, privacy, and expression issues identified in the SMSI include: Inadequate content moderation and problems with policy development and enforcement (including failure to mitigate anti-LGBTQ content and over-moderation of LGBTQ users); harmful algorithms and lack of algorithmic transparency; inadequate transparency and user controls around data privacy; an overall lack of transparency and accountability across the industry, among many other issues — all of which disproportionately impact LGBTQ people and other marginalized communities.
“We need to hold the line — as tech companies are taking unprecedented leaps backwards, we remain firm in advocating for basic best practices that protect the safety of LGBTQ people on these platforms,” said GLAAD’s Senior Director of Social Media Safety Jenni Olson. “This is not normal. Our communities deserve to live in a world that does not generate or profit off of hate.”
Read the full report.
About the GLAAD Social Media Safety Program
As the leading national LGBTQ media advocacy organization, GLAAD is working every day to hold tech companies and social media platforms accountable and to secure safe online spaces for LGBTQ people. The GLAAD Social Media Safety Program produces the highly respected annual Social Media Safety Index (SMSI) and researches, monitors, and reports on a variety of issues facing LGBTQ social media users — with a focus on safety, privacy, and expression.