“LGBTQ people in America are under attack like never before.”
— Under Fire: The War on LGBTQ People in America, Movement Advancement Project (February 2023)
Looking back on 2023, we saw a proliferation of online hate and disinformation against LGBTQ people — rhetoric echoed on political stages, in mass media, and in legislative bodies.
As GLAAD’s latest Social Media Safety Index (SMSI) documented, most major platforms continued to fail to protect LGBTQ people. Problems identified in the SMSI include inadequate content moderation and enforcement (both inaction on anti-LGBTQ hateful content and over-moderation/censorship of LGBTQ users); harmful and polarizing algorithms; and an overall lack of transparency and accountability across the industry, among many other issues. All of these disproportionately impact LGBTQ people and other marginalized communities, who are uniquely vulnerable to hate, harassment, and discrimination on these platforms.
Online hate and disinformation continue to be alarming public health and safety issues that translate to offline harms. Over the past year, GLAAD and the Anti-Defamation League (ADL) recorded more than 700 incidents of violence and threats targeting LGBTQ people. More than 130 of those incidents targeted drag shows and performers specifically, with the vast majority making reference to “grooming,” a prominent anti-LGBTQ trope online.
Understanding Anti-LGBTQ Conspiracy Theories & Disinformation
High-follower hate accounts have promoted three particularly consequential anti-LGBTQ tropes over the past year: the “groomer” and “gender ideology” conspiracy theories, and the malicious mischaracterization of gender-affirming healthcare.
Though not new, the wildly popular “groomer” libel is the vicious and baseless assertion that LGBTQ people are sexualizing and “indoctrinating” children. While platforms including Meta, TikTok, and Reddit issued public statements in 2022 affirming that the use of “groomer” as an anti-LGBTQ slur violates their hate speech policies, they have largely failed to enforce these policies. In a February 2023 report, for example, Media Matters researchers found that Meta has profited from more than 200 ads promoting “groomer” rhetoric since 2022.
Prominent anti-LGBTQ accounts (and far-right media outlets) also continue to spread extremist rhetoric that falsely mischaracterizes evidence-based gender-affirming healthcare for trans youth as “child abuse,” “mutilation,” or “sterilization.” This narrative has no basis in science; it perpetuates fear and hatred of transgender people and their families, friends, allies, and healthcare providers, fueling threats and violence as well as an epidemic of legislative attacks rolling back the basic rights of transgender people across the U.S.
Finally, 2023 saw the rise of “gender ideology” as an anti-LGBTQ narrative online. Like the anti-trans trope of “transgenderism,” “gender ideology” disingenuously characterizes being transgender as an ideology rather than an intrinsic identity. In a chilling example that was widely shared online, far-right pundit Michael Knowles asserted in his March 2023 CPAC speech that “Transgenderism [sic] must be eradicated from public life entirely.”
In addition to these harmful tropes, the dehumanizing practice of targeted misgendering and deadnaming of trans and nonbinary people (particularly public figures) also continues to be a popular mode of hate speech across many platforms. As noted in GLAAD’s Guide to Anti-LGBTQ Online Hate Speech, there are many additional examples and a long history to these disingenuous rhetorical strategies.
Regardless of motivation, anti-LGBTQ conspiracy theories, tropes, dog whistles, and other harmful rhetoric have terrible real-world impacts. Taking the “groomer” libel as an example, as a recent Institute for Strategic Dialogue (ISD) Global report summarizes: “Around the world today, the use of the term ‘groomer’ is used to justify hate, discrimination and violence against the LGBTQ+ community. In the US particularly, the use of this language, along with conspiratorial thinking around queer people, has led to legislation preventing the discussion of LGBTQ+ issues in schools and preventing trans children from accessing gender affirming healthcare, and has motivated attacks on LGBTQ+ individuals.”
The ISD report further explains: “Part of the success of this mainstreaming lies in the ability of fringe actors to manipulate the general public’s lack of knowledge of queer culture and particularly their insensitivity to the plight of trans people. This has been coupled with the most potent fear – that of people harming children, which has been used to justify hatred and irrationality for centuries. In reality, the ‘groomer’ slur harms those children who are most in need of support – queer and gender non-conforming children. According to The Trevor Project’s 2022 National Survey on LGBTQ Youth Mental Health, 45 percent of LGBTQ+ youth have seriously considered attempting suicide in the past year.”
Opportunities and Approaches for Mitigating Anti-LGBTQ Hate and Disinformation
We’ve seen many recent examples of platforms applying their terms of service to remove content, reduce its visibility, demonetize it, and/or append fact-checked information to content or accounts. In April 2023, for example, YouTube temporarily demonetized the channel of anti-trans pundit Matt Walsh after he repeatedly violated platform guidelines. Meta added an AP Fact Check overlay to Facebook and Instagram videos by far-right commentator Liz Wheeler (who falsely claimed that an “S” had been added to the “LGBTQ” umbrella to represent “Satanist”). Pre-Musk Twitter suspended the account of Jordan Peterson, another far-right commentator, who relentlessly attacked Elliot Page when he came out as trans (Peterson’s account, however, is now alive and well). And TikTok permanently suspended Gays Against Groomers and Libs of TikTok for incessant anti-LGBTQ animus that clearly violated the platform’s policies; both accounts have also received repeated temporary suspensions from other platforms, including Meta’s Facebook and Instagram. Slack and Linktree have permanently suspended Libs of TikTok — which, as USA Today reported in November 2023, has for years engaged in networked harassment, posting content in ways that appear directly responsible for generating dozens of bomb threats, death threats, and other harassment.
These platform mitigations are commendable examples of enforcing basic hate speech policies, and the most important takeaway is that Meta, YouTube, Twitter, and TikTok are fully capable of mitigating bigotry. They could clearly choose to protect LGBTQ people (and everyone) from high-follower accounts posting anti-LGBTQ hate content that violates their own policies. But in far too many instances they don’t. Despite their own hateful conduct policies, platforms decide thousands of times a day to treat clearly violative material as allowable — diluting the purpose of those policies and encouraging hate accounts to push the envelope further and further as a way of increasing engagement. All of this is, of course, motivated by the fact that such engagement generates revenue, not only for the accounts but for the platforms themselves.
Removing violative content isn’t the only way for platforms to address anti-LGBTQ hate and disinformation. As the examples above show, companies have a wide array of proactive and reactive options, tools, and approaches.
A striking illustration of this ability to implement mitigations appears in the damning November 2020 New York Times story, “Facebook Struggles to Balance Civility and Growth.” It describes how, in the days after the 2020 U.S. presidential election, the platform deployed an algorithm to demote posts it had determined were “bad for the world” — but, because of the resulting reduction in site engagement (and the corresponding negative impact on revenue), the decision was made to “less stringently demote such content.”
There are many opportunities for proactive mitigation. For example, in the fall of 2022, following GLAAD’s recommendation, both TikTok and (pre-Musk) Twitter added a “Know Your Facts” info panel (linking to the WHO and HHS, respectively, as sources of accurate information) that surfaced when users searched “Monkeypox,” “Mpox,” or “MPV.” (Unfortunately, Twitter/X has since removed the info panel and rolled back other safeguards against medical misinformation.) This small but significant feature actively helped stem the tide of misinformation about Mpox, which has often been entwined with anti-LGBTQ hate.
This example underscores how institutions across civil society — social media platforms in particular — can play constructive roles in serving the public good.
GLAAD urgently calls on all social media platforms to take responsibility for the safety of their products — for the sake of their LGBTQ users, and for everyone.
A version of this piece appeared in the 2023 Social Media Safety Index (SMSI). The next SMSI is forthcoming in 2024.