2024 Social Media Safety Index

Executive Summary, Key Conclusions and Recommendations, and Methodology

“When social media platforms use algorithms that amplify hate, fail to enforce their own policies against hate, and profit off the targeting of communities, people suffer — and democracy is undermined.”
— The Leadership Conference on Civil and Human Rights[1]

In addition to the annual Platform Scorecard ratings and recommendations, this year’s Social Media Safety Index (SMSI) report provides an overview of the current state of LGBTQ social media safety, privacy, and expression, including sections on: the economy of hate and disinformation, predominant anti-LGBTQ tropes, policy best practices, suppression of LGBTQ content, the connections between online hate and offline harm, regulation and oversight, AI, data protection and privacy, and more.

In the 2024 SMSI Platform Scorecard, some platforms' scores have improved while others have fallen, but overall the scores remain abysmal, with all platforms other than TikTok receiving F grades (TikTok earned a D+).

An Important Note About The Scorecard Ratings: While the six platforms all have policies prohibiting hate and harassment on the basis of sexual orientation, gender identity and/or expression, and other protected characteristics, the SMSI Scorecard does not include indicators to rate them on enforcement of those policies. GLAAD and other monitoring organizations repeatedly encounter failures in enforcement of community guidelines across major platforms. However, given the difficulty involved in assessing enforcement methodologically — which is further complicated by a relative lack of transparency from the companies — these failures are not quantified in the scores below.

2024 SMSI Platform Grades

Specific LGBTQ safety, privacy, and expression issues identified in the Platform Scorecard, and in the SMSI report in general, include:

  • Inadequate content moderation and problems with policy development and enforcement, including both failure to mitigate anti-LGBTQ content and over-moderation/suppression of LGBTQ users.
  • Harmful algorithms and a lack of algorithmic transparency.
  • Inadequate transparency and user controls around data privacy.
  • An overall lack of transparency and accountability across the industry.

These issues, among many others, disproportionately impact LGBTQ users and other marginalized communities who are uniquely vulnerable to hate, harassment, and discrimination.

These areas of concern are exacerbated for those who belong to multiple marginalized communities, including people of color, women, immigrants, people with disabilities, religious minorities, and more. Social media platforms should be safe for everyone, in all of who we are.

Like the 2023 Social Media Safety Index, this year's report illuminates the epidemic of anti-LGBTQ hate, harassment, and disinformation across the major social media platforms: TikTok, X, YouTube, and Meta's Instagram, Facebook, and newly added Threads. The report especially makes note of the high-follower hate accounts and right-wing figures who continue to manufacture and circulate most of this content.[2] The devastating impact of hate, disinformation, and conspiracy theory content continues to be one of the most consequential issues of our time, with hate-driven and politically motivated false narratives running rampant online and offline and causing real-world harm to our collective public health, safety, and democracy. As a major January 2024 Online Extremism report from the U.S. Government Accountability Office (GAO) notes: “Research suggests the occurrence of hate crimes is associated with hate speech on the internet [and] suggests individuals radicalized on the internet can perpetrate violence as lone offenders.”

Targeting historically marginalized groups, including LGBTQ people, with fear-mongering, lies, and bigotry is both an intentional strategy of bad actors attempting to consolidate political power and a lucrative enterprise (for the right-wing figures and groups that drive such campaigns and tropes, and for the multi-billion-dollar tech companies that host them). It’s clear that regulatory oversight of the entire industry is necessary to address, as the Global Disinformation Index puts it: “the perverse incentives that drive the corruption of our information environment.”[3]

In addition to egregious levels of inadequately moderated anti-LGBTQ material across platforms (for example see GLAAD’s recent report, Unsafe: Meta Fails to Moderate Extreme Anti-trans Hate Across Facebook, Instagram, and Threads), we also see the corollary problem of over-moderation of legitimate LGBTQ expression — including wrongful takedowns of LGBTQ accounts and creators,[4] mis-labeling of LGBTQ content as “adult,” unwarranted demonetization of LGBTQ material under such policies, shadowbanning,[5] and similar suppression of LGBTQ content.[6] Meta’s recent policy change limiting algorithmic eligibility of so-called “political content” (partly defined by Meta as “social topics that affect a group of people and/or society at large”) is especially concerning.[7]

There’s nothing unusual or surprising about the fact that companies prioritize their corporate profits and bottom line over public safety and the best interests of society (which is why we have regulatory agencies to oversee major industries). Unfortunately, social media companies are currently woefully under-regulated. And platform safety concerns have risen, particularly over the past year, as so many of the world’s largest social media companies have slashed content moderation teams. In an NBC News feature about the wave of layoffs, an anonymous X staffer reflected that “hateful conduct and potentially violent conduct has really increased.”[8] Such downsizing is also negatively impacting the fight against disinformation on social media platforms, as outlined at length in the 2023 Center for Democracy and Technology report Seismic Shifts.[9] The impacts of these business decisions on our information environment will no doubt continue to worsen as major social media companies have shifted towards policies allowing highly consequential, known false information and hate-motivated extremism to proliferate on their platforms.[10]

Photo: The Gender Spectrum Collection

Key Conclusions of the 2024 SMSI include:

  • Anti-LGBTQ rhetoric and disinformation on social media translates to real-world offline harms.
  • Platforms are largely failing to successfully mitigate dangerous anti-LGBTQ hate and disinformation and frequently do not adequately enforce their own policies regarding such content.
  • Platforms also disproportionately suppress LGBTQ content, including via removal, demonetization, and forms of shadowbanning.
  • There is a lack of effective, meaningful transparency reporting from social media companies with regard to content moderation, algorithms, data protection, and data privacy practices.

Core Recommendations:

  • Strengthen and enforce existing policies that protect LGBTQ people and others from hate, harassment, and misinformation/disinformation, and also from suppression of legitimate LGBTQ expression.
  • Improve moderation, including by training moderators on the needs of LGBTQ users, and moderate across all languages, cultural contexts, and regions. This also means not being overly reliant on AI.[11]
  • Be transparent with regard to content moderation, community guidelines, terms of service policy implementation, algorithm designs, and enforcement reports. Such transparency should be facilitated via working with independent researchers.[12]
  • Respect data privacy. To protect LGBTQ users from surveillance and discrimination, platforms should reduce the amount of data they collect, infer, and retain. They should cease the practice of targeted surveillance advertising, including the use of algorithmic content recommendation,[13] and should implement end-to-end encryption by default on all private messaging to protect LGBTQ people from persecution, stalking, and violence.[14]
  • Promote civil discourse and proactively message expectations for user behavior, including respecting platform hate and harassment policies.[15]

The chief emphasis of this report is on the state of LGBTQ safety with regard to the platforms; while it should be acknowledged that these companies continue to implement many positive initiatives and activities to protect their LGBTQ users, they simply must do so much more. It is also vitally important to remember the crucial role social media platforms play for LGBTQ people. As noted in our section on regulatory oversight, proposed legislative social media safety solutions should be mindful of not causing unintended harm to LGBTQ users, especially LGBTQ youth. As a US-based organization, GLAAD focuses primarily on the domestic landscape; however, there are enormous global implications of this work, and we call upon platforms to take responsibility for the worldwide impacts and safety of their products.

Methodology

In preparing this year’s report, GLAAD reviewed thought leadership, research, journalism, and findings across the field of social media safety and platform accountability, consulting with the GLAAD SMSI advisory committee and other organizations and leaders in technology and social justice. The 2024 SMSI Articles and Reports Appendix features links to important relevant topics and developments in the field of LGBTQ social media safety (from the implications of Meta’s recent Instagram/Threads “political content” policy implementation, to anti-trans hate trope trends like “transvestigation,” to the super-charging of disinformation-for-profit via financially-incentivized anonymous accounts on X). Please also refer to the 2023, 2022, and 2021 SMSI reports and the 2024 Bibliography of Anti-LGBTQ Online To Offline Real World Harms, which remain substantial and valuable resources on these topics.

The centerpiece of this year’s report is the Platform Scorecard. Spearheaded by GLAAD’s independent researcher Andrea Hackl, working in partnership with Ranking Digital Rights (RDR), the 2024 Social Media Safety Index Platform Scorecard looks at 12 LGBTQ-specific indicators and evaluates each of the six major platforms drawing on RDR’s standard methodology to generate numeric ratings for each product with regard to LGBTQ safety. Researchers interested in digging deeper into the 12 LGBTQ-specific indicators may explore this 2024 Research Guidance document.

While this report is focused on the six major social media platforms, we know that other companies and platforms — from Snapchat to Spotify, Amazon to Zoom — can benefit from these recommendations as well. We strongly urge all platforms and companies to make the safety of their LGBTQ customers and users an urgent priority.

On the Firewall Between Financial Sponsorship and GLAAD’s Advocacy Work
Several of the companies that own products and platforms listed in this report are current financial sponsors of GLAAD, a 501(c)(3) non-profit. A firewall exists between GLAAD’s advocacy work and GLAAD’s sponsorships and fundraising. As part of our media advocacy and work as a media watchdog, GLAAD has called, and will continue to call, public attention to issues that are barriers to LGBTQ safety, as well as barriers to fair and accurate LGBTQ content and coverage — including issues originating from companies that are current financial sponsors.
