Contact: press@glaad.org
GLAAD’S FOURTH ANNUAL SOCIAL MEDIA SAFETY INDEX GIVES FAILING GRADES ON LGBTQ SAFETY TO MAJOR SOCIAL MEDIA PLATFORMS
TikTok Earns D+ while Facebook, Instagram, YouTube, Threads, and X All Graded F
Despite moderate score improvements since 2023 on LGBTQ safety, privacy, and expression, all platforms insufficiently protect LGBTQ users
May 21, 2024 – GLAAD, the world’s largest lesbian, gay, bisexual, transgender and queer (LGBTQ) media advocacy organization, today announced the findings of its fourth annual Social Media Safety Index (SMSI), the respected in-depth report on LGBTQ safety, privacy, and expression. Five of the six major social media platforms – YouTube, X/Twitter, and Meta’s Facebook, Instagram, and Threads – received failing F grades on the SMSI Platform Scorecard; for YouTube, X/Twitter, Facebook, and Instagram this marks the third consecutive year of failing grades, while Threads, newly rated this year, also received an F. TikTok earned a D+.
Read the full report now at: GLAAD.org/SMSI/2024
Offering specific findings and recommendations, the report calls on companies to urgently prioritize LGBTQ safety, especially to address the extraordinary quantities of anti-trans hate, harassment, and disinformation running rampant on their platforms. Despite score improvements from previous years, all companies fail to meet basic standards on many of the Scorecard’s 12 indicators, which address a range of issues including data privacy, moderation transparency, training of content moderators, workforce diversity, and more.
From GLAAD President and CEO, Sarah Kate Ellis:
“Leaders of social media companies are failing at their responsibility to make safe products. When it comes to anti-LGBTQ hate and disinformation, the industry is dangerously lacking on enforcement of current policies. There is a direct relationship between online harms and the hundreds of anti-LGBTQ legislative attacks, rising rates of real-world anti-LGBTQ violence and threats of violence, that social media platforms are responsible for and should act with urgency to address.”
In the 2024 SMSI Platform Scorecard, some platforms improved their scores over last year while others fell, but overall the scores remain abysmal: every platform other than TikTok (D+) received an F.
- TikTok: D+ — 67% (+10 points from 2023)
- Facebook: F — 58% (-3 points from 2023)
- Instagram: F — 58% (-5 points from 2023)
- YouTube: F — 58% (+4 points from 2023)
- Threads: F — 51% (new 2024 rating)
- X/Twitter: F — 41% (+8 points from 2023)
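For readers who want to work with the Scorecard numbers directly, the following minimal Python sketch (illustrative only, and not part of the report’s methodology) restates the 2024 results above and derives each platform’s 2023 score from the reported year-over-year change; the letter grades are copied from the report rather than computed, since grade thresholds are not stated here.

```python
# Illustrative restatement of the 2024 SMSI Platform Scorecard results above.
# The 2023 score is recovered by subtracting the reported year-over-year change;
# letter grades come straight from the report (grade thresholds are not stated here).

scores_2024 = {
    # platform: (grade, 2024 score in %, change vs. 2023 in points; None = newly rated)
    "TikTok":    ("D+", 67, +10),
    "Facebook":  ("F",  58, -3),
    "Instagram": ("F",  58, -5),
    "YouTube":   ("F",  58, +4),
    "Threads":   ("F",  51, None),
    "X/Twitter": ("F",  41, +8),
}

for platform, (grade, score, change) in scores_2024.items():
    prior = f"{score - change}% in 2023" if change is not None else "not rated in 2023"
    print(f"{platform}: {grade} ({score}% in 2024; {prior})")
```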
Created in partnership with Ranking Digital Rights (RDR), the SMSI Platform Scorecard evaluates each of the six major platforms on 12 LGBTQ-specific indicators, drawing on RDR’s standard methodology to generate a numeric rating for each product with regard to LGBTQ safety. The Scorecard does not include indicators on enforcement of policies, even though GLAAD and other monitoring organizations repeatedly encounter failures in enforcement of community guidelines across major platforms.
Specific LGBTQ safety, privacy, and expression issues identified in the Platform Scorecard, and in the SMSI report in general, include:
- Inadequate content moderation and problems with policy development and enforcement (including issues with both failure to mitigate anti-LGBTQ content and over-moderation/suppression of LGBTQ users);
- Harmful algorithms and lack of algorithmic transparency;
- Inadequate transparency and user controls around data privacy;
- An overall lack of transparency and accountability across the industry, among many other issues — all of which disproportionately impact LGBTQ users and other marginalized communities who are uniquely vulnerable to hate, harassment, and discrimination.
This year’s report also illuminates the epidemic of anti-LGBTQ hate, harassment, and disinformation across major social media platforms, taking particular note of the high-follower hate accounts and right-wing figures who continue to manufacture and circulate most of this content.
Key Conclusions of the 2024 SMSI include:
- Anti-LGBTQ rhetoric and disinformation on social media translates to real-world offline harms.
- Platforms are largely failing to successfully mitigate dangerous anti-LGBTQ hate and disinformation and frequently do not adequately enforce their own policies regarding such content.
- Platforms also disproportionately suppress LGBTQ content, including via removal, demonetization, and forms of shadowbanning.
- There is a lack of effective, meaningful transparency reporting from social media companies with regard to content moderation, algorithms, data protection, and data privacy practices.
Core Recommendations:
- Strengthen and enforce existing policies that protect LGBTQ people and others from hate, harassment, and misinformation/disinformation, and also from suppression of legitimate LGBTQ expression.
- Improve moderation, including by training moderators on the needs of LGBTQ users and by moderating across all languages, cultural contexts, and regions. This also means not being overly reliant on AI.
- Be transparent with regard to content moderation, community guidelines, terms of service policy implementation, algorithm designs, and enforcement reports. Such transparency should be facilitated by working with independent researchers.
- Respect data privacy and stop violating it. To protect LGBTQ users from surveillance and discrimination, platforms should reduce the amount of data they collect, infer, and retain. They should cease the practice of targeted surveillance advertising, including the use of algorithmic content recommendation. In addition, they should implement end-to-end encryption by default on all private messaging to protect LGBTQ people from persecution, stalking, and violence.
- Promote civil discourse and proactively message expectations for user behavior, including respecting platform hate and harassment policies.
From GLAAD’s Senior Director of Social Media Safety, Jenni Olson:
“In addition to these egregious levels of inadequately moderated anti-LGBTQ hate and disinformation, we also see a corollary problem of over-moderation of legitimate LGBTQ expression — including wrongful takedowns of LGBTQ accounts and creators, shadowbanning, and similar suppression of LGBTQ content. Meta’s recent policy change limiting algorithmic eligibility of so-called ‘political content,’ which the company partly defines as ‘social topics that affect a group of people and/or society at large,’ is especially concerning.”
GLAAD’s SMSI Advisory Committee
Providing expert input and guidance on the project, the GLAAD SMSI advisory committee includes respected leaders working at the intersections of tech accountability and LGBTQ social justice. Committee members include: ALOK, writer, performer, and media personality; Lucy Bernholz, Ph.D., Director, Digital Civil Society Lab at Stanford University; Alejandra Caraballo, Esq., Clinical Instructor, Cyberlaw Clinic, Berkman Klein Center for Internet & Society at Harvard Law School; Joan Donovan, Ph.D., Founder, Critical Internet Studies Institute and Assistant Professor of Journalism and Emerging Media Studies, Boston University; Jelani Drew-Davi, Senior Communications Specialist, Kairos; Liz Fong-Jones, Field CTO, Honeycomb; Evan Greer, Director, Fight for the Future; Leigh Honeywell, CEO and Co-Founder, Tall Poppy; Maria Ressa, Journalist & CEO, Rappler; Tom Rielly, Founder, TED Fellows Program and Founder, PlanetOut.com; Dr. Sarah T. Roberts, Faculty Director, UCLA Center for Critical Internet Inquiry; Brennan Suen, Deputy Director of External Affairs, Media Matters for America; Kara Swisher, Editor-at-Large, New York Magazine; Marlena Wisniak, Senior Advisor, Digital Rights, European Center for Not-for-Profit Law.
The Social Media Safety Index was created with support from Craig Newmark Philanthropies, the Gill Foundation, and Logitech.
About the GLAAD Social Media Safety Program:
As the leading national LGBTQ media advocacy organization, GLAAD works every day to hold tech companies and social media platforms accountable and to secure safe online spaces for LGBTQ people. GLAAD’s Social Media Safety (SMS) program researches, monitors, and reports on a variety of issues facing LGBTQ social media users, with a focus on safety, privacy, and expression. The SMS program has consulted directly with platforms and tech companies on some of the most significant LGBTQ policy and product developments over the years. In addition to its ongoing advocacy work with platforms (including TikTok, X/Twitter, YouTube, Meta’s Facebook, Instagram, and Threads, and others) and the highly respected annual Social Media Safety Index (SMSI) report, the SMS program produces resources, guides, publications, and campaigns, and works to educate the general public and raise awareness in the media about LGBTQ social media safety issues, especially anti-LGBTQ hate and disinformation.
About GLAAD:
GLAAD rewrites the script for LGBTQ acceptance. As a dynamic media force, GLAAD tackles tough issues to shape the narrative and provoke dialogue that leads to cultural change. GLAAD protects all that has been accomplished and creates a world where everyone can live the life they love. For more information, please visit www.glaad.org or connect @GLAAD on social media.