Contact: press@glaad.org
GLAAD’S THIRD ANNUAL SOCIAL MEDIA SAFETY INDEX SHOWS ALL FIVE MAJOR SOCIAL MEDIA PLATFORMS FAIL ON LGBTQ SAFETY AND UNDERSCORES HOW ONLINE HATE AND MISINFORMATION MANIFEST INTO REAL-WORLD HARM FOR LGBTQ PEOPLE
Twitter becomes the most dangerous social platform for LGBTQ people as hate speech continues to rise, widely unchecked
Facebook, Instagram, TikTok, and YouTube still insufficiently protect LGBTQ users despite small improvements on LGBTQ safety, privacy, and expression
June 15, 2023 – GLAAD, the world’s largest lesbian, gay, bisexual, transgender and queer (LGBTQ) media advocacy organization, today announced the findings of its third annual Social Media Safety Index (SMSI), a report on LGBTQ user safety. All five major social media platforms – Facebook, Instagram, TikTok, YouTube and Twitter – received low and failing scores on the SMSI Platform Scorecard for the second consecutive year. The SMSI found that the platforms continue to fail to safeguard LGBTQ users from online hate speech, to provide transparency in their use of LGBTQ-specific user data, and to make commitments to protect LGBTQ users, specifically policies and commitments to protect transgender, nonbinary, and gender non-conforming users from being targeted.
GLAAD’s 2023 SMSI also takes a comprehensive look at how anti-LGBTQ online hate speech incites, spreads, and manifests as real-world harm and violence. The study explores how false and baseless tropes, hateful slurs and rhetoric, misinformation, lies, and conspiracy theories about LGBTQ people often circulate unchecked across social platforms, and how hate speech and other content policies often go unenforced against such content. GLAAD calls on social media platforms to take responsibility for ineffective policies, products, and algorithms that create a dangerous environment for LGBTQ users, noting that action from the platforms is limited because “enragement leads to profitable engagement.”
GLAAD’s Platform Scorecard in the 2023 SMSI found Twitter to be the most dangerous platform for LGBTQ people. Of the five major platforms included in this study, Twitter was the only one whose scores declined from last year’s report.
From GLAAD President and CEO, Sarah Kate Ellis: “Dehumanizing anti-LGBTQ content on social media, such as misinformation and hate, has an outsized impact on real-world violence and harmful anti-LGBTQ legislation, but social media platforms too often fail at enforcing their own policies regarding such content. Especially as many of the companies behind these platforms recognize Pride month, they should recognize their roles in creating a dangerous environment for LGBTQ Americans and urgently take meaningful action.”
Read the full report now at: GLAAD.org/SMSI
GLAAD’s Social Media Safety Index, launched in 2021, is the industry’s first standard for tackling online anti-LGBTQ hate and increasing safety for LGBTQ social media users.
In part as a result of the company’s April 2023 removal of trans and nonbinary user protections, Twitter was the sole platform to decrease its score from 2022, diving 12 points from 45% to 33%.
The report also documents inconsistencies from other social media platforms. For example, Meta’s hate speech policies prohibit anti-LGBTQ “groomer” content, but the SMSI shows that the company regularly fails to enforce that policy.
The SMSI Platform Scorecard: GLAAD’s SMSI Scorecard and the SMSI report overall have a clear throughline: social media platforms must do better and follow through on the commitments set out in their own policies, products, and safeguards. Platforms must address inadequate content moderation and enforcement (including both failure to act on anti-LGBTQ hateful content and over-moderation and censorship of LGBTQ users), harmful and polarizing algorithms, and an overall lack of transparency and accountability across the industry, among many other issues. LGBTQ users, along with other marginalized communities who are uniquely vulnerable to hate, harassment, and discrimination, bear the brunt of these issues.
Created in partnership with Goodwin Simon Strategic Research and the tech accountability organization Ranking Digital Rights, the SMSI Platform Scorecard evaluates LGBTQ safety, privacy, and expression on five major platforms (Facebook, Instagram, TikTok, YouTube, and Twitter) against 12 LGBTQ-specific indicators, including explicit protections from hate and harassment for LGBTQ users, gender pronoun options on profiles, and prohibitions on advertising that could be harmful and/or discriminatory to LGBTQ people. Every platform’s score except Twitter’s improved from 2022, yet safety and the quality of safeguards for LGBTQ users remain unsatisfactory.
- Instagram: 63% (+15 points from 2022)
- Facebook: 61% (+15 points from 2022)
- TikTok: 57% (+14 points from 2022)
- YouTube: 54% (+9 points from 2022)
- Twitter: 33% (-12 points from 2022)
Descriptions of each platform’s scores and recommendations are included in the full report at GLAAD.org/SMSI.
Real World Harms: How Online Hate Turns Into Offline Violence
Online hate produces very real harms for LGBTQ people, including a chilling effect on LGBTQ freedom of expression for fear of being targeted and the traumatic psychological impact of relentless exposure to slurs and hateful conduct. GLAAD has documented more than 160 acts or threats of violence at LGBTQ events so far in 2023, and GLAAD’s recent Accelerating Acceptance report found that 86% of non-LGBTQ Americans agree that exposure to online hate content leads to real-world violence. Hate-driven false narratives and conspiracy theories aiming to position LGBTQ people as “groomers” continue to circulate freely on social media, causing tangible harm to the community.
Relatedly, the problem of anti-LGBTQ hate speech and disinformation continues to be an alarming public health and safety issue, as flagged in the Surgeon General’s report on social media and youth mental health.
As documented in GLAAD’s SMSI, even when social media platforms have policies in place to mitigate dangerous hate speech and disinformation, they largely fail to enforce those policies adequately. They also disproportionately suppress LGBTQ content, including through removal, demonetization, and forms of “shadowbanning,” the practice of making a user’s posts and comments invisible to other users without any official ban notification.
From GLAAD’s Senior Director of Social Media Safety Jenni Olson: “There is an urgent need for effective regulatory oversight of the tech industry — and especially social media companies — with the goal of protecting LGBTQ people, and all people, from the dangerous impacts of an industry that continues to prioritize corporate profits over the public interest. The status quo in which anti-LGBTQ hate, harassment, and malicious disinformation continue to flow freely on their platforms compounds an already-dangerous reality for LGBTQ, and especially trans and nonbinary, people online and offline.”
GLAAD’s Recommendations
In addition to the Platform Scorecard, GLAAD’s SMSI provides specific recommendations to each platform to improve LGBTQ user safety. Recommendations to all platforms include:
- Strengthen and enforce existing policies that protect LGBTQ people and others from hate, harassment, and mis-, dis-, and malinformation (MDM), as well as from suppression of legitimate LGBTQ expression.
- Improve moderation, including by training moderators on the needs of LGBTQ users and by moderating across all languages, cultural contexts, and regions. This also means not relying on AI alone.
- Be transparent about content moderation, community guidelines, terms-of-service policy implementation, algorithm design, and enforcement reports. Such transparency should be facilitated by working with independent researchers.
- Respect data privacy. To protect LGBTQ users from surveillance and discrimination, platforms should reduce the amount of data they collect and retain, implement end-to-end encryption by default on all private messaging to protect LGBTQ people from persecution, stalking, and violence, and cease targeted surveillance advertising, including the use of powerful algorithms to recommend content, which can potentially out users.
- Promote civil discourse and proactively message expectations for user behavior (including actually respecting platform hate and harassment policies).
How GLAAD is Taking Action
As underscored in this study’s comprehensive analysis, GLAAD takes social media safety for LGBTQ users seriously. While GLAAD remains committed to holding social media giants and their leadership accountable for how LGBTQ people, issues, and the community are represented on their platforms, action is needed now to adequately protect LGBTQ users in several key areas, including privacy, transparency, and expression. Grounded in this year’s SMSI findings, here’s how GLAAD plans to accelerate acceptance and safety for LGBTQ people across social media:
- GLAAD will continue its participation with the #StopToxicTwitter coalition, a group of more than 60 organizations, calling on Twitter’s top advertisers to accept nothing less than a safe platform for their brands.
- GLAAD and the Anti-Defamation League (ADL) will continue their partnership. In late 2022, GLAAD and the ADL announced the creation of an analyst position in ADL’s Center on Extremism dedicated to tracking and countering threats against the LGBTQ community.
- GLAAD, along with other tech and LGBTQ organizations, will continue to work with platforms to create space and provide guidance on re-prioritizing action items based on their platform-specific Scorecard results. This includes:
- implementing company-wide commitments to LGBTQ expression and privacy;
- encouraging platforms to publish comprehensive data on how policies protecting LGBTQ users are enforced;
- adopting explicit policies to protect transgender, non-binary, and gender non-conforming users.
- GLAAD plans to launch a content and education series on understanding how anti-LGBTQ conspiracy theories and disinformation spread.
- GLAAD plans to create guides for journalists specifically around Social Media Safety, to help newsrooms at all levels report accurately and fairly on issues relating to platform safety for LGBTQ users.
- GLAAD plans to enhance existing GLAAD Media Institute (GMI) trainings and workshops with the latest learnings and findings from the 2023 GLAAD SMSI, including integrating new curricula into the overall GMI portfolio.
GLAAD’s SMSI Advisory Committee
To create the Social Media Safety Index, GLAAD convened an advisory committee of thought leaders to advise on industry and platform-specific recommendations in the Index. Committee members include ALOK, writer, performer, and media personality; Lucy Bernholz, Ph.D, Director, Digital Civil Society Lab at Stanford University; Alejandra Caraballo, Esq., Clinical Instructor, Cyberlaw Clinic, Berkman Klein Center for Internet & Society at Harvard Law School; Jelani Drew-Davi, Director of Campaigns, Kairos; Liz Fong-Jones, Field CTO, Honeycomb; Evan Greer, Director, Fight for the Future; Leigh Honeywell, CEO and Co-Founder, Tall Poppy; Maria Ressa, Journalist & CEO, Rappler; Tom Rielly, Founder, TED Fellows program, PlanetOut.com; Dr. Sarah T. Roberts, Faculty Director, UCLA Center for Critical Internet Inquiry; Brennan Suen, Deputy Director of External Affairs, Media Matters for America; Kara Swisher, editor-at-large, New York Magazine; Marlena Wisniak, Senior Advisor, Digital Rights, European Center for Not-for-Profit Law.
The Social Media Safety Index was created with support from Craig Newmark Philanthropies, the Gill Foundation, and Logitech.
Read the full report now at: GLAAD.org/SMSI
###
About GLAAD’s Social Media Safety Program: As the leading national LGBTQ media advocacy organization, GLAAD works every day to hold tech companies and social media platforms accountable and to secure safe online spaces for LGBTQ people. GLAAD’s Social Media Safety program actively researches, monitors, and reports on a variety of issues facing LGBTQ social media users, with a focus on safety, privacy, and expression, and advocates for solutions in numerous realms: online hate and harassment, AI bias, polarizing algorithms, data privacy, and more. The annual Social Media Safety Index (SMSI) provides recommendations for the industry at large and reports on LGBTQ user safety across the five major social media platforms: Facebook, Instagram, Twitter, YouTube, and TikTok.
About GLAAD:
GLAAD rewrites the script for LGBTQ acceptance. As a dynamic media force, GLAAD tackles tough issues to shape the narrative and provoke dialogue that leads to cultural change. GLAAD protects all that has been accomplished and creates a world where everyone can live the life they love. For more information, please visit www.glaad.org or connect @GLAAD on social media.