2024 Social Media Safety Index

Instagram

Instagram receives a score of 58, a five-point decrease from last year’s score

In the 2024 SMSI Platform Scorecard, Instagram receives a score of 58, a five-point decrease from last year’s score. Instagram’s Community Guidelines continue to prohibit hate, discrimination, and harassment against LGBTQ users on the platform. The company also discloses comprehensive information on the types of behaviors and content that are prohibited under its protected groups policy. While Meta continues to have a policy in place that protects transgender, nonbinary, and gender non-conforming users from targeted misgendering, this policy requires self-reporting,[1] does not extend to public figures, and the company does not disclose a similar prohibition against targeted deadnaming.

The company also continues to fall short of providing adequate transparency in several other key areas. According to company disclosure, Instagram’s feature allowing users to add pronouns to their profiles is currently not available to all users. In other important areas, the company has taken steps backward on user transparency. While Meta’s “Gender Identity Policy and User Tools” policy discloses that content moderators were trained on gender identity policy enforcement in 2022, it is not clear whether the company has conducted similar training in the last year.

Key Recommendations:

  • Provide all users with tools for self-expression: The company should make its feature allowing users to add their gender pronouns to their profiles available to all users, and give users more granular options to control who can see their pronouns.
  • Protect transgender, nonbinary, and gender non-conforming users (including public figures) from targeted deadnaming: The company should adopt a comprehensive policy that prohibits targeted deadnaming on Instagram, explain in detail how this policy is enforced, and remove any self-reporting requirement (the company should also update its targeted misgendering policy to not require self-reporting and to protect public figures). The company should also disclose that it employs various processes and technologies — including human and automated content moderation — to detect content and behaviors violating these policies.
  • Train content moderators on the needs of LGBTQ users: Meta should disclose that it continues to provide comprehensive training for content moderators that educates them on the needs of LGBTQ people and other users in protected categories.

[1] For more information on how self-reporting requirements complicate the enforcement of targeted deadnaming and misgendering policies, please see GLAAD’s post: “All Social Media Platform Policies Should Recognize Targeted Misgendering and Deadnaming as Hate Speech.”
