    2024 Social Media Safety Index


    Launched in 2023, Meta’s app Threads is new to the SMSI Platform Scorecard, and receives a score of 51. Covered by Instagram’s Community Guidelines, Threads has a comprehensive protected groups policy that protects LGBTQ users from hate, harassment, and discrimination on the platform. In its “Gender Identity Policy and User Tools” policy, and in tier 3 of its Bullying and Harassment policy, the company discloses a policy protecting transgender, nonbinary, and gender non-conforming users from targeted misgendering. However, the policy requires self-reporting,[1] the protections do not extend to public figures, and the company does not make a similar disclosure regarding targeted deadnaming. In its 2023 Responsible Business Practices Report, Meta also discloses a commitment to diversifying its workforce, and publishes voluntarily self-disclosed data on its LGBTQ employees.

    However, Meta falls short of providing adequate policy protections for its LGBTQ users on several other important issues. Notably, Threads does not have a policy in place that expressly protects users from targeted deadnaming. While the company has a feature allowing users to add preferred pronouns to their user profiles, it discloses that this option is currently not available to all users, and it offers only limited options for users to control who can see their gender pronouns. Moreover, Meta’s most recent Community Guidelines Enforcement report does not disclose any data for Threads, so it is not clear how many pieces of content or accounts were restricted for violations of the platform’s policies protecting LGBTQ users.

    Key Recommendations:

    • Protect transgender, nonbinary, and gender non-conforming users (including public figures) from targeted deadnaming: The company should adopt a comprehensive policy that prohibits targeted deadnaming on Threads, explain in detail how the policy is enforced, and not require self-reporting (the company should also update its targeted misgendering policy to remove the self-reporting requirement and to protect public figures). The company should also disclose that it employs various processes and technologies — including human and automated content moderation — to detect content and behaviors violating these policies.
    • Provide all users with tools for self-expression: The company should make its feature allowing users to add their gender pronouns to their user profiles available to all users, and provide them with more control over who can see their pronouns.
    • Be transparent about content and account restrictions: The company should publish transparency reporting for Threads showing the number of pieces of content and accounts restricted for violations of policies protecting LGBTQ users from hate, harassment, and discrimination.

    [1] For more information on how self-reporting requirements complicate the enforcement of targeted deadnaming and misgendering policies, please see GLAAD’s post “All Social Media Platform Policies Should Recognize Targeted Misgendering and Deadnaming as Hate Speech.”
