Executive Summary, Key Findings and Recommendations, and Methodology
“Companies must do better to protect their users and the public interest.” — Ranking Digital Rights
In addition to the annual Platform Scorecard ratings below, this year’s Social Media Safety Index (SMSI) report provides a set of Key Findings and Recommendations as guidance for companies to improve LGBTQ social media safety, privacy, and expression across their platforms. The Scorecard employs 14 LGBTQ-specific indicators to evaluate policies and product features of six major platforms (TikTok, X, YouTube, and Meta’s Instagram, Facebook, and Threads), adapting the standard methodology of noted tech and human rights research organization Ranking Digital Rights (RDR). GLAAD urges platforms to review RDR’s extensive research and recommendations,[1] and exhorts all platforms and tech companies to prioritize the Key Findings and Recommendations of the SMSI.
Executive Summary of Platform Scorecard
The 2025 SMSI Platform Scorecard scores reflect that some platforms have made product and policy improvements in the past year, while others have retreated from best practices in LGBTQ safety, privacy, and expression (some drastically so). Overall, the scores remain extremely low. The most significant changes this past year are the draconian rollbacks from Meta[2] and YouTube[3], particularly their retractions of policy protections for transgender and nonbinary people. In one positive change from last year’s evaluation, YouTube updated its advertising policies, which now prohibit advertisers from excluding users from seeing ads based on their sexual orientation and gender identity.
Although the current state of LGBTQ social media safety may seem worse than ever, GLAAD continues to work with all platforms to improve their products, speaking out as a constant voice of advocacy and urging all companies to protect LGBTQ people, and especially trans people, online.
An Important Contextual Note About the 2025 Platform Scorecard and Methodology Changes
The 2025 scores are not directly comparable to the 2024 scores due to extensive revisions to the 2025 Platform Scorecard methodology. In some cases, existing indicators and elements were revised to clarify our evaluation standards (e.g., the language of Q12 was revised to clarify that content moderator trainings should take place annually). We also added a new indicator and several elements addressing emerging threats to LGBTQ safety, privacy, and expression that have gained traction since we initially developed the Scorecard. These revisions to the methodology resulted in universal score declines across platforms. Therefore, year-to-year comparisons of the 2025 scores to previous scores will not accurately reflect relative platform progress. Relevant policy changes for each of the platforms are discussed below.
The 14 indicators of the Platform Scorecard collectively measure platforms against the following best practices for LGBTQ safety, privacy, and expression.
Every platform should have public-facing policies that:
Protect LGBTQ people from hate, harassment, and violence on the platform.
Prohibit targeted misgendering[4] and deadnaming[5] on the basis of gender identity.
Prohibit content promoting so-called “conversion therapy.”[6]
Prohibit advertising content that promotes hate, harassment, and violence against LGBTQ individuals on the basis of protected characteristics.
Explain the proactive steps the platform takes to stop demonetizing and/or wrongfully removing legitimate content and accounts related to LGBTQ topics and issues.
Explain the platform’s internal structures for ensuring the fulfillment of its commitments to overall LGBTQ safety, privacy, and expression.
Companies should also provide users with a dedicated field to add and change gender pronouns on their user profiles, and explain what options users have to control or limit the company’s collection, inference, and use of data and information related to their sexual orientation and gender identity.
Companies should state that: they do not recommend content to users based on their disclosed or inferred sexual orientation or gender identity, unless a user has proactively opted in; and that they do not allow third-party advertisers to target users with, or exclude them from seeing, content or advertising based on their disclosed or inferred sexual orientation or gender identity, unless the user has proactively opted in.
In the realm of transparency, every platform should regularly publish data about the actions it takes to restrict content and accounts that violate policies protecting LGBTQ people; and about the actions it takes to stop demonetizing and/or wrongfully removing legitimate content and accounts related to LGBTQ topics and issues.
Lastly, to create products that better serve all of their users, companies should make a public commitment to continuously diversify their workforces, and ensure accountability by annually publishing voluntarily self-disclosed data on the number of LGBTQ employees across all levels of the company.
The 2025 Platform Scorecard scores:
Executive Summary of Key Findings and Recommendations
All platforms evaluated in the Scorecard have at least some[7] policies prohibiting hate and harassment on the basis of sexual orientation, gender identity and/or expression, and other protected characteristics. Given the methodological difficulty of assessing policy enforcement, which is further complicated by a lack of transparency from the companies, enforcement failures are not quantified in the Scorecard scores. However, in GLAAD’s day-to-day research and monitoring, and in reports from other organizations, researchers, and journalists, failures are seen repeatedly in both the development of policies and in their enforcement across major platforms.[8]
The Key Findings and Recommendations bullet points below are drawn from GLAAD’s year-round work and research, and accompany the Platform Scorecard. The most notable highlight of the 2025 research is a pair of findings: in addition to inadequately moderating harmful anti-LGBTQ material (for example, see GLAAD’s 2024 report, Unsafe: Meta Fails to Moderate Extreme Anti-trans Hate Across Facebook, Instagram, and Threads), platforms also frequently over-moderate legitimate LGBTQ expression. This includes wrongful takedowns of LGBTQ accounts and creators,[9] mis-labeling of LGBTQ content as “adult”[10] or “explicit,” unwarranted demonetization of LGBTQ material,[11] shadowbanning,[12] and other kinds of suppression[13] of LGBTQ content.[14] (Such unwarranted restrictions occur with non-LGBTQ content as well.[15])
Additional LGBTQ safety, privacy, and expression issues include:
Lack of algorithmic transparency and harmful algorithms.
Inadequate transparency and user controls around data privacy.
Lack of transparency with regard to content moderation protocols, including information about moderator trainings.
Apparent over-reliance on AI moderation without human review.
Failures to effectively moderate anti-LGBTQ content in many non-English languages.
Reductions in transparency tools and access for independent researchers.
These and other issues disproportionately impact LGBTQ users and other marginalized communities who are uniquely vulnerable to the harms of online hate, harassment, and discrimination.[16] These areas of concern are exacerbated for those who are members of multiple communities, including people of color, women, immigrants, people with disabilities, religious minorities, and more.[17] Social media platforms should be safe for everyone, in all of who we are.
As a US-based non-profit organization, GLAAD focuses primarily on domestic issues; however, this work has enormous global implications, and GLAAD calls upon platforms to take responsibility for the safety of their products worldwide.[18] Social media platforms are vitally important for LGBTQ people, as spaces where we connect, learn, and find community.[19] While these companies have implemented many positive initiatives to support and protect their LGBTQ users,[20] they simply must do more. Lastly, as GLAAD has long noted, proposed legislative social media safety solutions must be mindful of not censoring LGBTQ resources or causing unintended harm to LGBTQ users, especially LGBTQ youth.[21]
Key Findings
Recent hate speech policy rollbacks from Meta and YouTube present grave threats to safety and are harmful to LGBTQ people on these platforms.[22]
Platforms are largely failing to mitigate harmful anti-LGBTQ hate and disinformation that violates their own policies.[23]
Platforms disproportionately suppress LGBTQ content, via removal, demonetization, and forms of shadowbanning.[24]
Anti-LGBTQ rhetoric and disinformation on social media have been shown to lead to offline harms.[25]
Social media companies continue to withhold meaningful transparency about content moderation, algorithms, data protection, and data privacy practices.[26]
Key Recommendations
Strengthen and enforce (or restore) existing policies and mitigations that protect LGBTQ people and others from hate, harassment, and misinformation,[27] while also reducing suppression of legitimate LGBTQ expression.[28]
Improve moderation by providing mandatory training for all content moderators (including those employed by contractors) focused on LGBTQ safety, privacy, and expression; and moderate across all languages, cultural contexts, and regions.[29] AI systems should be used to flag content for human review, not for automated removals.[30]
Work with independent researchers to provide meaningful transparency about content moderation, community guidelines, development and use of AI and algorithms, and enforcement reports.[31]
Respect data privacy. Platforms should reduce the amount of data they collect, infer, and retain,[32] and cease the practice of targeted surveillance advertising,[33] including the use of algorithmic content recommender systems,[34] and other incursions on user privacy.[35]
Promote and incentivize civil discourse, including by working with creators and proactively messaging expectations for user behavior, such as respecting platform hate and harassment policies.[36]
Methodology
For the Key Findings and Recommendations of the SMSI, GLAAD’s Social Media Safety (SMS) team reviewed research, journalism, and reports across the field of social media safety and platform accountability. The SMS team also consulted with the SMSI advisory committee and other organizations and leaders in technology and human rights. The past year’s developments in the field of LGBTQ social media safety have been tracked in the 2025 SMSI Articles and Reports Appendix. Please also refer to the 2024, 2023, 2022, and 2021 SMSI reports.
The 2025 Platform Scorecard methodology and research guidance from research analyst Andrea Hackl can be found here. The full detailed scoring sheets are available here.
Significant 2024-2025 Reports on LGBTQ Social Media Safety
In 2021, the inaugural GLAAD Social Media Safety Index report offered a first-of-its-kind dedicated analysis of LGBTQ safety on social media platforms. There are now many powerful reports and studies devoted to these issues, and some of the most significant of the past year are listed in our 2025 Appendix of Articles and Reports. We urge everyone, especially platform leadership and executives, to read the full reports.
On the Firewall Between Financial Sponsorship and GLAAD’s Advocacy Work
Several of the companies that own products and platforms listed in this report are current financial sponsors of GLAAD, a 501(c)(3) non-profit. A firewall exists between GLAAD’s advocacy work and GLAAD’s sponsorships and fundraising. As part of our media advocacy and media watchdog work, GLAAD publicly calls attention to issues that are barriers to LGBTQ safety, as well as barriers to fair and accurate LGBTQ content and coverage, including issues originating from companies that are current financial sponsors.