
    2025 Social Media Safety Index

    Executive Summary, Key Findings and Recommendations, and Methodology

    “Companies must do better to protect their users and the public interest.”  — Ranking Digital Rights

    In addition to the annual Platform Scorecard ratings below, this year’s Social Media Safety Index (SMSI) report provides a set of Key Findings and Recommendations as guidance for companies to improve LGBTQ social media safety, privacy, and expression across their platforms. The Scorecard employs 14 LGBTQ-specific indicators to evaluate policies and product features of six major platforms (TikTok, X, YouTube, and Meta’s Instagram, Facebook, and Threads), adapting the standard methodology of noted tech and human rights research organization Ranking Digital Rights (RDR). GLAAD urges platforms to review RDR’s extensive research and recommendations,[1] and exhorts all platforms and tech companies to prioritize the Key Findings and Recommendations of the SMSI.

    Executive Summary of Platform Scorecard

    The 2025 SMSI Platform Scorecard scores reflect that some platforms have made product and policy improvements in the past year, while others have retreated from best practices in LGBTQ safety, privacy, and expression (some drastically so). Overall, the scores remain extremely low. The most significant changes this past year are the draconian rollbacks from Meta[2] and YouTube[3], particularly their retractions of policy protections for transgender and nonbinary people. In a positive change from last year’s evaluation, YouTube updated its advertising policies, which now prohibit advertisers from excluding users from seeing ads based on their sexual orientation and gender identity.

    Although the current state of LGBTQ social media safety may seem worse than ever, GLAAD continues to work with all platforms to improve their products, speaking out as a constant voice of advocacy urging all companies to protect LGBTQ people, and especially trans people, online.

    An Important Contextual Note About the 2025 Platform Scorecard and Methodology Changes
    The 2025 scores are not directly comparable to the 2024 scores due to extensive revisions to the Platform Scorecard methodology. In some cases, existing indicators and elements were revised to clarify our evaluation standards (e.g., the language of Q12 was revised to clarify that content moderator trainings should take place annually). We also added a new indicator and several elements addressing emerging threats to LGBTQ safety, privacy, and expression that have gained traction since we initially developed the Scorecard. These revisions to the methodology resulted in universal score declines across platforms; therefore, year-to-year comparisons of the 2025 scores to previous scores will not accurately reflect relative platform progress. Relevant policy changes for each of the platforms are discussed below.

    The indicators of the Platform Scorecard collectively measure platforms against the following best practices for LGBTQ safety, privacy, and expression.

    Every platform should have public-facing policies that:

    • protect LGBTQ people from hate, harassment, and violence on the platform;
    • prohibit targeted misgendering[4] and deadnaming[5] on the basis of gender identity;
    • prohibit content promoting so-called “conversion therapy”;[6]
    • prohibit advertising content that promotes hate, harassment, and violence against LGBTQ individuals on the basis of protected characteristics;
    • explain the proactive steps the platform takes to stop demonetizing and/or wrongfully removing legitimate content and accounts related to LGBTQ topics and issues; and
    • explain its internal structures to best ensure the fulfillment of its commitments to overall LGBTQ safety, privacy, and expression on the platform.


    Companies should also provide users with a dedicated field to add and change gender pronouns on their user profiles; and explain what options users have to control or limit the company’s collection, inference, and use of data and information related to their sexual orientation and their gender identity.

    Companies should state that: they do not recommend content to users based on their disclosed or inferred sexual orientation or gender identity, unless a user has proactively opted in; and that they do not allow third-party advertisers to target users with, or exclude them from seeing, content or advertising based on their disclosed or inferred sexual orientation or gender identity, unless the user has proactively opted in.

    In the realm of transparency, every platform should regularly publish data about the actions it takes to restrict content and accounts that violate policies protecting LGBTQ people; and about the actions it takes to stop demonetizing and/or wrongfully removing legitimate content and accounts related to LGBTQ topics and issues.

    Lastly, to create products that better serve all of their users, companies should make a public commitment to continuously diversify their workforces, and ensure accountability by annually publishing voluntarily self-disclosed data on the number of LGBTQ employees across all levels of the company.

    The 2025 Platform Scorecard scores:

    [Chart: 2025 Social Media Safety Index Platform Scores]

    Executive Summary of Key Findings and Recommendations

    All platforms evaluated in the Scorecard have (some[7]) policies prohibiting hate and harassment on the basis of sexual orientation, gender identity and/or expression, and other protected characteristics. Given the difficulty of methodically assessing policy enforcement, which is further complicated by a lack of transparency from the companies, enforcement failures are not quantified in the Scorecard scores. However, in GLAAD’s day-to-day research and monitoring, and in reports from other organizations, researchers, and journalists, failures are seen repeatedly in both the development of policies and in their enforcement across major platforms.[8]

    The Key Findings and Recommendations bullet points below are drawn from GLAAD’s year-round work and research, and accompany the Platform Scorecard. The most notable highlight of the 2025 research is a pair of findings: in addition to inadequately moderating harmful anti-LGBTQ material (for example, see GLAAD’s 2024 report, Unsafe: Meta Fails to Moderate Extreme Anti-trans Hate Across Facebook, Instagram, and Threads), platforms also frequently over-moderate legitimate LGBTQ expression. This over-moderation includes wrongful takedowns of LGBTQ accounts and creators,[9] mislabeling of LGBTQ content as “adult”[10] or “explicit,” unwarranted demonetization of LGBTQ material,[11] shadowbanning,[12] and other kinds of suppression[13] of LGBTQ content.[14] (Such unwarranted restrictions occur with non-LGBTQ content as well.[15])


    Additional LGBTQ safety, privacy, and expression issues include: lack of algorithmic transparency and harmful algorithms; inadequate transparency and user controls around data privacy; lack of transparency with regard to content moderation protocols, including information about moderator trainings; apparent over-reliance on AI moderation without human review; failures to effectively moderate anti-LGBTQ content in many non-English languages; and reductions in transparency tools and access for independent researchers, among other issues. All of these issues disproportionately impact LGBTQ users and other marginalized communities, who are uniquely vulnerable to the harms of online hate, harassment, and discrimination.[16] These areas of concern are exacerbated for those who are members of multiple communities, including people of color, women, immigrants, people with disabilities, religious minorities, and more.[17] Social media platforms should be safe for everyone, in all of who we are.

    As a US-based non-profit organization, GLAAD focuses primarily on domestic issues; however, this work has enormous global implications, and GLAAD calls upon platforms to take responsibility for the safety of their products worldwide.[18] Social media platforms are vitally important for LGBTQ people, as spaces where we connect, learn, and find community.[19] While these companies have implemented many positive initiatives to support and protect their LGBTQ users,[20] they simply must do more. Lastly, as GLAAD has long noted, proposed legislative social media safety solutions must be mindful of not censoring LGBTQ resources or causing unintended harm to LGBTQ users, especially LGBTQ youth.[21]

    Key Findings

    • Recent hate speech policy rollbacks from Meta and YouTube present grave threats to safety and are harmful to LGBTQ people on these platforms.[22]
    • Platforms are largely failing to mitigate harmful anti-LGBTQ hate and disinformation that violates their own policies.[23]
    • Platforms disproportionately suppress LGBTQ content, via removal, demonetization, and forms of shadowbanning.[24]
    • Anti-LGBTQ rhetoric and disinformation on social media have been shown to lead to offline harms.[25]
    • Social media companies continue to withhold meaningful transparency about content moderation, algorithms, data protection, and data privacy practices.[26]

    Key Recommendations

    • Strengthen and enforce (or restore) existing policies and mitigations that protect LGBTQ people and others from hate, harassment, and misinformation,[27] while also reducing suppression of legitimate LGBTQ expression.[28]
    • Improve moderation by providing mandatory training for all content moderators (including those employed by contractors) focused on LGBTQ safety, privacy, and expression; and moderate across all languages, cultural contexts, and regions.[29] AI systems should be used to flag content for human review, not to automate removals.[30]
    • Work with independent researchers to provide meaningful transparency about content moderation, community guidelines, development and use of AI and algorithms, and enforcement reports.[31]
    • Respect data privacy. Platforms should reduce the amount of data they collect, infer, and retain,[32] and should cease targeted surveillance advertising,[33] the use of algorithmic content recommender systems,[34] and other incursions on user privacy.[35]
    • Promote and incentivize civil discourse, including by working with creators and proactively messaging expectations for user behavior, such as respecting platform hate and harassment policies.[36]

    Methodology

    For the Key Findings and Recommendations of the SMSI, GLAAD’s Social Media Safety (SMS) team reviewed research, journalism, and reports across the field of social media safety and platform accountability. The SMS team also consulted with the SMSI advisory committee and other organizations and leaders in technology and human rights. The past year’s developments in the field of LGBTQ social media safety have been tracked in the 2025 SMSI Articles and Reports Appendix. Please also refer to the 2024, 2023, 2022, and 2021 SMSI reports.

    The 2025 Platform Scorecard methodology and research guidance from research analyst Andrea Hackl can be found here. The full detailed scoring sheets are available here.

    Significant 2024-2025 Reports on LGBTQ Social Media Safety

    In 2021, the inaugural GLAAD Social Media Safety Index report offered a first-of-its-kind dedicated analysis of LGBTQ safety on social media platforms. There are now many powerful reports and studies devoted to these issues, and some of the most significant of the past year are listed in our 2025 Appendix of Articles and Reports. We urge everyone, especially platform leadership and executives, to read the full reports.

    On the Firewall Between Financial Sponsorship and GLAAD’s Advocacy Work
    Several of the companies that own products and platforms listed in this report are current financial sponsors of GLAAD, a 501(c)(3) non-profit. A firewall exists between GLAAD’s advocacy work and GLAAD’s sponsorships and fundraising. As part of our media advocacy and media watchdog work, GLAAD publicly calls attention to issues that are barriers to LGBTQ safety, as well as barriers to fair and accurate LGBTQ content and coverage, including issues originating from companies that are current financial sponsors.

    Footnotes

    [1] The RDR Index offers a robust evaluation of 14 of the world’s most powerful digital platforms, looking at more than 300 aspects of company policies, including indicators on: Which companies commit to human rights? Who discloses the most about how they moderate content? Which have the safest data privacy policies and practices? And much more.
    [2] Ina Fried, “Meta’s new policies open gate to hate,” Axios, January 9, 2025.
    [3] Taylor Lorenz, “YouTube removes ‘gender identity’ from hate speech policy,” User Mag, April 3, 2025.
    [4] Targeted misgendering is a form of hate speech that involves the intentional use of the wrong gender and/or gender pronouns when referring or speaking to a transgender, nonbinary, or gender non-conforming person. Source: GLAAD
    [5] Targeted deadnaming is a form of hate speech whereby a person intentionally “reveal[s] a transgender person’s former name without their consent – often referred to as ‘deadnaming’ – [which] is an invasion of privacy that undermines the trans person’s true authentic identity, and can put them at risk for discrimination, even violence.” Source: GLAAD
    [6] “Conversion therapy” is a widely condemned practice that involves any psychological or religious intervention aimed at changing an LGBTQ person’s sexual orientation, gender identity, or gender expression. Complicating efforts to address the amplification of harmful “conversion therapy” content online, its purveyors also promote this dangerous practice under alternate labels such as “leaving homosexuality” and “unwanted same-sex attraction.” Sources: GLAAD; Global Project Against Hate and Extremism (GPAHE)
    [7] Meta’s January 2025 changes to its Hateful Conduct policy eviscerated many protections for LGBTQ users. GLAAD’s overview of these policy rollbacks can be found here.
    [8] Clint Rainey, “The Hate Speech Landscape on Facebook Is Worse than You Thought,” Fast Company, August 31, 2024.
    [9] Tricia Crimmins, “A Casting Call for Trans Actors Caused Instagram to Suspend Several Accounts for ‘Human Exploitation,’” The Daily Dot, July 11, 2024.
    [10] Nick Wolny, “The Pink Shadow Ban: How LGBTQ+ Influencers Are Fighting Censorship,” Out Magazine, July 1, 2024.
    [11] Sara Kingsley et al., “‘Give Everybody [..] a Little Bit More Equity’: Content Creator Perspectives and Responses to the Algorithmic Demonetization of Content Associated with Disadvantaged Groups,” Proceedings of the ACM on Human-Computer Interaction 6, no. CSCW2 (November 7, 2022): 1–37.
    [12] Tatum Hunter, “What Is Shadowbanning? Why Social Media May Be Hiding Your Posts,” The Washington Post, October 16, 2024.
    [13] Taylor Lorenz, “Instagram Blocked Teens from Searching LGBTQ-Related Content for Months,” User Mag, January 6, 2025.
    [14] Devin Coldewey, “Oversight Board Presses Meta to Revise ‘Convoluted and Poorly Defined’ Nudity Policy,” TechCrunch, January 17, 2023.
    [15] Tomas Apodaca and Natasha Uzcátegui-Liggett, “Demoted, Deleted, and Denied: There’s More than Just Shadowbanning on Instagram,” The Markup, February 25, 2024.
    [16] Rachel Keighley, “Hate Hurts: Exploring the Impact of Online Hate on LGBTQ+ Young People,” Women & Criminal Justice 32, no. 1–2 (October 17, 2021): 29–48.
    [17] Aditya Vashistha et al., “‘Vulnerable, Victimized, and Objectified’: Understanding Ableist Hate and Harassment Experienced by Disabled Content Creators on Social Media,” 2024.
    [18] Jaishree Kumar, “Queer Indians Confront Online Hate While Tech Platforms Stay Indifferent,” BOOM, June 26, 2024.
    [19] Linda Charmaraman, J. Maya Hernandez, and Rachel Hodes, “Marginalized and Understudied Populations Using Digital Media,” Handbook of Adolescent Digital Media Use and Mental Health, July 14, 2022, 188–214.
    [20] “#ForYourPride: Celebrating TikTok’s Visionary LGBTQIA+ Community,” TikTok Newsroom, May 31, 2024.
    [21] “Regulatory Oversight – 2024 Social Media Safety Index,” GLAAD, May 21, 2024.
    [22] “More Transparency and Less Spin,” Center for Countering Digital Hate (CCDH), February 24, 2025.
    [23] Shelby Jamerson, “Media Matters and GLAAD Found 100 Meta Posts Containing Anti-Trans Slur,” Media Matters for America, August 28, 2024.
    [24] Sophia Chen, “The Lost Data: How AI Systems Censor LGBTQ+ Content in the Name of Safety,” Nature News, September 24, 2024.
    [25] Archie Macfarlane, “Online Anti-LGBTQ+ Hate and Its Offline Consequences,” Tech Against Terrorism, July 27, 2024.
    [26] Jordan Kraemer, “On Social Media, Transparency Reporting Is Anything but Transparent,” Tech Policy Press, February 15, 2024.
    [27] Alice Hunsberger, “A Guide to Protecting LGBTQ+ Users,” Everything in Moderation*, June 3, 2024.
    [28] Tatum Hunter, “What Is Shadowbanning? Why Social Media May Be Hiding Your Posts,” The Washington Post, October 16, 2024.
    [29] Alice Hunsberger, “The Importance of Anti-Bias Training in Content Moderation,” Partner Hero, 2024.
    [30] Sophia Khatsenkova, “The EU Tells Twitter to Hire More Human Content Moderators amid Concerns of Rise of Illegal Content,” Euronews, September 3, 2023.
    [31] Matt Motyl et al., “Making Social Media Safer Requires Meaningful Transparency,” Tech Policy Press, October 2, 2024.
    [32] Adam Marshall, “Meta Fined for Collecting Data on Gay & Transgender Facebook Users,” Tech.co, November 5, 2024.
    [33] A 2024 FTC report found that “Large social media and video streaming companies have engaged in vast surveillance of users with lax privacy controls and inadequate safeguards for kids and teens.”
    [34] “YouTube’s algorithm pushes right-wing, explicit videos regardless of user interest or age, study finds.”
    [35] Private messaging apps should implement end-to-end encryption by default to protect LGBTQ people from persecution, stalking, and violence.
    [36] Lisa Schirch, Ravi Iyer, and Lena Slachmuijlder, “Toward Prosocial Tech Design Governance,” December 21, 2023.
