
    2024 Social Media Safety Index

    Instagram

    Instagram receives a score of 58, a five-point decrease from last year’s score

    In the 2024 SMSI Platform Scorecard, Instagram receives a score of 58, a five-point decrease from last year’s score. Instagram’s Community Guidelines continue to prohibit hate, discrimination, and harassment against LGBTQ users on the platform. The company also discloses comprehensive information on the types of behaviors and content that are prohibited under its protected groups policy. While Meta continues to have a policy in place that protects transgender, nonbinary, and gender non-conforming users from targeted misgendering, this policy requires self-reporting,[1] does not extend to public figures, and the company does not disclose a similar prohibition against targeted deadnaming.

    The company also continues to fall short of providing adequate transparency in several other key areas. According to company disclosure, Instagram’s feature allowing users to add pronouns to their profiles is still not available to all users. The company has also taken steps backward on user transparency: while Meta’s “Gender Identity Policy and User Tools” disclosure states that content moderators were trained on gender identity policy enforcement in 2022, it is not clear whether the company has conducted similar training in the past year.

    Key Recommendations:

    • Provide all users with tools for self-expression: The company should make its feature allowing users to add their gender pronouns to their profiles available to all users, and give users more granular options to control who can see their pronouns.
    • Protect transgender, nonbinary, and gender non-conforming users (including public figures) from targeted deadnaming: The company should adopt a comprehensive policy prohibiting targeted deadnaming on Instagram, explain in detail how the policy is enforced, and ensure that enforcement does not require self-reporting (the company should also update its targeted misgendering policy to remove the self-reporting requirement and to protect public figures). The company should also disclose that it employs various processes and technologies — including human and automated content moderation — to detect content and behaviors violating these policies.
    • Train content moderators on the needs of LGBTQ users: Meta should disclose that it continues to provide comprehensive training for content moderators that educates them on the needs of LGBTQ people and other users in protected categories.

    [1] For more information on how self-reporting requirements complicate the enforcement of targeted deadnaming and misgendering policies, please see GLAAD’s post: “All Social Media Platform Policies Should Recognize Targeted Misgendering and Deadnaming as Hate Speech.”
