
    State of the Field: LGBTQ Social Media Safety Reports

    Showcase of 2022–2023 Reports on LGBTQ Social Media Safety

    Since 2021, the GLAAD Social Media Safety Index (SMSI) report has offered a first-of-its-kind, dedicated analysis of LGBTQ safety, privacy, and expression across major social media platforms, including specific guidelines for these companies on how to better serve their LGBTQ users. In addition to the 2023 SMSI, a growing number of powerful reports and studies are devoted to these issues. A selection of the most significant is showcased below, including important work from Ekō, Media Matters for America, the Anti-Defamation League (ADL), the Center for Countering Digital Hate, WilmerHale, Human Rights Watch, UltraViolet, Women’s March, Kairos, the Armed Conflict Location & Event Data Project, the Human Rights Campaign (HRC), ISD Global, and the Global Project Against Hate and Extremism (GPAHE).

    Monetizing Hate: How 100+ Major Brands Are Bankrolling Anti-LGBTQ Extremism on YouTube
    Ekō — November 2023
    This hard-hitting exposé from corporate accountability group Ekō details how ads for dozens of major brands, including Nike, J.Crew, and L’Oréal, are appearing next to videos inciting violence and hatred against the LGBTQ+ community. The findings underscore significant failings in YouTube’s monetization and moderation systems, as well as reputational and business risks to some of the world’s biggest brands. The report’s authors call on advertisers to take a principled stand against funding hate and disinformation through a host of measures, including demanding access to detailed information about ad placements and enforcing business contracts regarding brand safety standards. They also demand action from YouTube to bolster moderation and demonetize harmful channels, in addition to urging transparency interventions from US policymakers. YouTube’s Community Guidelines prohibit anti-LGBTQ+ hate speech and harmful content; however, the report shows these policies are not being properly enforced. Researchers analyzed 13 monetized videos by well-known anti-LGBTQ+ figures that appear to breach YouTube’s policies on hateful and derogatory content, violent content, and incendiary and demeaning content. The report found that at least 104 brands that have expressed public support for the LGBTQ+ community were funding anti-LGBTQ hate content.

    TIMELINE: The impact of Libs of TikTok told through the educators, health care providers, librarians, LGBTQ people, and institutions that have been harassed and violently threatened
    Media Matters for America — November 2023
    This expansive report from Media Matters for America documents how harassment and threats of violence against at least 35 institutions, events, or individuals have followed incitement from the Libs of TikTok social media account. A USA Today feature confirmed the research, reporting: “USA TODAY verified bomb, death and other threats in more than two dozen cases.” The dedicated year-round work of Media Matters bears special mention for being so extensive and prolific; the dozens of dispatches and reports from its LGBTQ program can be found here.

    2023 ADL Online Hate and Harassment Report
    The Anti-Defamation League (ADL) — June 2023
    The 2023 ADL Online Hate and Harassment Report found that LGBTQ people, especially transgender respondents, continue to be the most harassed among marginalized demographic groups online (“76% of transgender respondents have been harassed in their lifetimes, with 51% of transgender respondents being harassed in the past 12 months. After transgender respondents, LGBQ+ [lesbian, gay, bisexual and queer] people experienced the most harassment at 47% in the past 12 months”). Additional powerful 2022–2023 short-form reports from the ADL (in partnership with GLAAD) focused on countering anti-LGBTQ+ extremism and hate, including these: At CPAC 2023, Anti-Transgender Hate Took Center Stage; Online Amplifiers of Anti-LGBTQ+ Extremism; and Antisemitism & Anti-LGBTQ+ Hate Converge in Extremist and Conspiratorial Beliefs. A December 2022 feature from the ADL Center for Tech and Society is an extremely important exploration of networked/stochastic harassment, an increasingly dangerous phenomenon. An additional pair of reports from GLAAD and the ADL in the summer of 2023 documented 500 incidents of anti-LGBTQ hate and extremism nationwide.

    Toxic Twitter: How Twitter Makes Millions from Anti-LGBTQ+ Rhetoric
    Center for Countering Digital Hate — March 2023
    “Twitter is making millions of dollars as anti-LGBTQ+ ‘grooming’ rhetoric jumps 119% under Elon Musk. Often targeting educators, pride events, or drag story hour events, the ‘grooming’ narrative demonizes the LGBTQ+ community with hateful tropes, using slurs like ‘groomer’ and ‘pedophile.’ The hateful ‘grooming’ narrative online is driven by a small number of influential accounts with large followings. Now new estimates from the Center show that just five of these accounts are set to generate up to $6.4 million per year for Twitter in ad revenues. These five accounts promote online hate that has been reported to lead to real-world violence, like harassment and threats, including some bomb threats.”

    Report On Google Civil Rights Audit
    WilmerHale — March 2023
    In this civil rights audit released by Google in March 2023, the most substantial LGBTQ guidance (which continues to be a recommendation of the Social Media Safety Index) is this: “unless violative content is covered within its existing hate speech, harassment, and cyberbullying policies, YouTube’s policies do not on their face prohibit intentional misgendering or deadnaming of individuals. Both acts have the potential to create an unsafe environment for users and real-world harm. We recommend Google review its policies to ensure it is appropriately addressing issues such as the intentional misgendering or deadnaming of individuals and continue to regularly review its hate and harassment policies to adapt to changing norms regarding protected groups.” Additional recommendations include that YouTube expand “mandatory unconscious bias and LGBTQ cultural sensitivity training” for its moderators; continue to “evaluate how YouTube’s products and policies are working for creators and artist communities of different races, ethnicities, gender identities, and sexual orientations;” and, “For ads that may consider gender for targeting purposes, Google should prioritize implementation of inclusive gender identity options for users and ensure targeting features respect those declarations.” 

    Digital Targeting and Its Offline Consequences for LGBT People in the Middle East and North Africa
    Human Rights Watch — February 2023
    “The targeting of LGBT people online is enabled by their precarious legal status… In the absence of protection by laws or sufficient digital platform regulations, both security forces and private individuals have been able to target LGBT people with impunity. Under the United Nations Guiding Principles on Business and Human Rights, social media companies have a responsibility to respect human rights, including the rights to nondiscrimination, privacy, and freedom of expression. Digital platforms, such as Meta (Facebook, Instagram), and Grindr, are not doing enough to protect users vulnerable to digital targeting… Digital platforms should invest in content moderation, particularly in Arabic, by quickly removing abusive content as well as content that could put users at risk. Platforms should conduct human rights due diligence that includes identifying, preventing, ceasing, mitigating, remediating, and accounting for potential and actual adverse impacts of digital targeting on human rights.”

    From URL to IRL: The Impact of Social Media on People of Color, Women, and LGBTQ+ Communities
    UltraViolet, Women’s March, Kairos, and GLAAD — November 2022
    This November 2022 report commissioned by UltraViolet, GLAAD, Kairos, and Women’s March shows that women, people of color, and LGBTQ+ people experience higher levels of harassment and threats of violence on social media than other users. Among other key findings, the report shows that 57% of people have seen posts calling for physical violence based on a person’s race, gender, or sexuality. Additionally, LGBTQ+ people and women respondents report higher rates of harassment than other groups. The study further shows that 60% of LGBTQ people feel harmed not only by direct harassment and hate, but also by witnessing harassment against other LGBTQ community members such as celebrities and public figures (compared to only 24% of the base sample). We know that high-follower hate accounts show a pattern of directing hateful content against LGBTQ celebrities as a vehicle for expressing general anti-LGBTQ bigotry. We also know that social media companies maintain policy loopholes that permit hateful content against public figures to remain on their platforms; this perpetuates harm against entire communities. If platforms truly believe in making their products safe for LGBTQ people, these loopholes should be re-evaluated.

    Fact Sheet: Anti-LGBT+ Mobilization on the Rise in the United States
    Armed Conflict Location & Event Data Project — November 2022
    “[In 2022] Acts of political violence targeting the LGBT+ community have more than tripled compared to 2021. With the role of social media platforms in the dissemination of mis-/disinformation — aggravated by ineffective content moderation policies and failures to quell the spread of false claims and conspiracy theories — the anti-LGBT+ narrative has reached far beyond the areas that have seen the highest concentration of offline activity.” Also see the Anti-LGBT+ Mobilization section of ACLED’s December 2022 report “From the Capitol Riot to the Midterms: Shifts in American Far-Right Mobilization Between 2021 and 2022.”

    Meta Profits Off Hateful Advertising
    Anti-Defamation League (ADL) — October 2022
    “ADL’s analysis found Meta has accepted large sums of money for ads on hateful topics such as antisemitism and transphobia… Despite Meta forbidding baseless accusations of ‘grooming’ that target the LGBTQ+ community because they violate its hate speech policy, the company continues to profit off political ads promoting such hateful messages. An estimated 2.9 billion people use at least one of Meta’s platforms daily, including Facebook and Instagram. Meta states, ‘we have a responsibility to promote the best of what people can do together by keeping people safe and preventing harm.’ Yet Meta regularly fails its users by profiting from ads that promote antisemitism and homophobia. Meta is not only providing a platform that allows these hateful messages to reach thousands of users, the company is also giving these narratives a dangerous level of credibility with its audience.”

    Digital Hate: Social Media’s Role in Amplifying Dangerous Lies About LGBTQ+ People
    Human Rights Campaign and the Center for Countering Digital Hate — August 2022
    “Extremist politicians and their allies engineered an unprecedented and dangerous anti-LGBTQ+ misinformation campaign that saw discriminatory and inflammatory ‘grooming’ content surge by over 400% across social media platforms. [Content that] platforms not only failed to crack down on, but also profited from… In a matter of mere days, just ten people drove 66% of impressions for the 500 most viewed hateful ‘grooming’ tweets — including Gov. Ron DeSantis’s press secretary Christina Pushaw, extremist members of Congress like Marjorie Taylor Greene and Lauren Boebert, and pro-Trump activists like ‘Libs of TikTok’ founder Chaya Raichik. On Facebook and Instagram, 59 paid ads promoted the same narrative. Despite similar policies prohibiting anti-LGBTQ+ hate content on both social media platforms, only one ad was removed.”

    A Snapshot of Anti-Trans Hatred in Debates around Transgender Athletes
    ISD Global — January 2022
    “According to Twitter and Meta policies, transgender individuals, together with all members of the LGBTQIA+ community, are a protected group that should be safeguarded from hate speech on their platforms. However, new ISD research has found that these policies are poorly enforced and still suffer from gaps in implementation.”

    Conversion Therapy Online: The Ecosystem & Conversion Therapy Online: The Players
    Global Project Against Hate and Extremism (GPAHE) — January 2022
    The recommendations of these two powerful January 2022 reports from GPAHE offer clear and easy-to-implement best-practice guidance for social media platforms to address content related to the harmful practice of so-called “conversion therapy” (which has been condemned as dangerous by all major medical, psychiatric, and psychological organizations and banned by dozens of countries and states). The 2023 Social Media Safety Index features an overview of the state of “conversion therapy” policies on social media platforms. In addition to GLAAD’s efforts urging platforms to add prohibitions against so-called “conversion therapy” content to their community guidelines, we also urge these companies to effectively enforce these policies.

    The 2023 SMSI Articles & Reports Appendix features links to dozens of additional reports of interest. 

    The 2024 GLAAD Social Media Safety Index will be released this coming summer.

    Stay tuned!