Contact: press@glaad.org
GLAAD DECRIES META’S ONGOING FAILURE TO MODERATE ANTI-TRANS HATE CONTENT
GLAAD RESPONDS TO NEW STATEMENT FROM META:
“The company has lost the trust of its LGBTQ users, and it’s going to take real action to begin to get it back.”
Meta CEO Mark Zuckerberg Ignores Call for Safety Plan to Address Epidemic of Anti-Trans Content
(New York, NY — March 15, 2024) Today, GLAAD, the world’s largest lesbian, gay, bisexual, transgender, and queer (LGBTQ) media advocacy organization, condemned Meta’s negligence and the company’s ongoing failure to moderate anti-trans hate content across its platforms, as Meta issued a new statement addressing the Oversight Board’s January 2024 ruling in the “Post in Polish Targeting Trans People” case (a post which featured an image in the colors of the transgender flag alongside violent anti-transgender hate speech). Full details on the ruling are here. The Oversight Board ruling cited “Meta’s repeated failure to take the correct enforcement action” on the extreme anti-trans hate post, which clearly violated Meta’s own policies.
Today, nearly two months after the Oversight Board issued its decision in the case, Meta responded to the ruling. Meta’s Transparency Center states that the company is “assessing feasibility” of ensuring that “flag-based visual depictions of gender identity that do not contain a human figure are understood as representations of a group defined by the gender identity of its members” to “clarify instructions for enforcement of this form of content at scale, whenever it contains a violating attack.”
On January 16, the Oversight Board — the body that makes non-binding but precedent-setting rulings on Facebook, Instagram, and Threads content moderation cases — overturned Meta’s repeated decisions not to take down a Facebook post targeting transgender people with violent speech. The post was an egregious example of anti-trans hate advocating for transgender people to commit suicide: it featured an image of a striped curtain in the blue, pink, and white colors of the transgender flag with a text overlay in Polish saying, ‘New technology. Curtains that hang themselves.’ The post was repeatedly flagged by community members, but Meta’s content moderators allowed it to remain. It was only removed after the Oversight Board alerted Meta. The case illuminates systemic failures in the company’s moderation practices, including a widespread failure to enforce its own policies, as noted by the Oversight Board.
GLAAD responded today:
“Meta’s ongoing failures to enforce their own policies against anti-LGBTQ, and especially anti-trans hate is simply unacceptable,” said GLAAD President and CEO Sarah Kate Ellis. “GLAAD, HRC, and 250+ LGBTQ celebrities, leaders, and allies directly urged executives at Meta nine months ago to address the anti-trans violence and hate on its platforms, but Meta has taken no action in response. The company has lost the trust of its LGBTQ users, and it’s going to take real action to begin to get it back.”
After January’s Oversight Board decision announcement, GLAAD re-escalated a June 2023 open letter from GLAAD, HRC, and 250+ LGBTQ celebrities and allies calling on Meta and CEO Mark Zuckerberg to create and publicly share a plan of action for addressing the epidemic of anti-trans hate on Meta platforms. Nine months later, Meta has not issued a response.
As highlighted in GLAAD’s 2023 Social Media Safety Index (SMSI) report, Meta is largely failing to mitigate dangerous anti-trans and anti-LGBTQ hate and disinformation, despite such content violating its own policies. The June 2023 SMSI also made the specific recommendation to Meta and others that they must better train moderators on the needs of LGBTQ users and enforce policies around anti-LGBTQ content across all languages, cultural contexts, and regions.
Additional Background on the Oversight Board Case:
In its decision, the Oversight Board ruled that the anti-trans post violated both Meta’s Hate Speech and Suicide and Self-Injury Community Standards. The Board ruling (which echoes GLAAD’s longstanding guidance to Meta) states that: “the fundamental issue in this case is not with the policies, but their enforcement. Meta’s repeated failure to take the correct enforcement action, despite multiple signals about the post’s harmful content, leads the Board to conclude the company is not living up to the ideals it has articulated on LGBTQIA+ safety. The Board urges Meta to close enforcement gaps, including by improving internal guidance to reviewers [content moderators].”
GLAAD’s September 2023 Public Comment to the Oversight Board for their adjudication of the case noted that: “Meta’s content moderators should have accurately enforced its policies in the first place. It is a serious problem that the post was only removed after the Oversight Board alerted Meta. This case powerfully illuminates highly consequential systemic failures with the company’s moderation practices that have broad implications for all anti-LGBTQ hate content, as well as for content that targets all historically marginalized groups. Such moderation may be more complex than recognizing basic slurs, but this is why the company must provide adequate training and guidance to its moderators on recognizing anti-trans hate. Meta is fully capable of implementing such training yet continues to fail to prioritize it, resulting in epidemic levels of anti-LGBTQ hate across its platforms.” Read GLAAD’s full public comment here.
Research on online and offline anti-LGBTQ threats and violence:
- The 2023 GLAAD Social Media Safety Index found that there are “very real resulting harms to LGBTQ people online, including a chilling effect on LGBTQ freedom of expression for fear of being targeted, and the sheer traumatic psychological impact of being relentlessly exposed to slurs and hateful conduct.”
- GLAAD’s recent Accelerating Acceptance report found that 86% of non-LGBTQ Americans agree that exposure to online hate content leads to real-world violence.
- In 2023, GLAAD and the Anti-Defamation League (ADL) recorded more than 700 incidents of violence and threats targeting LGBTQ people in the US. In a June 2023 report, the ADL and GLAAD tracked more than 130 incidents that targeted drag shows and drag performers specifically. The vast majority of these incidents made reference to “grooming,” a false and harmful narrative that high-follower anti-LGBTQ accounts consistently promote online.
- A 2022 survey from GLAAD, UltraViolet, Women’s March, and Kairos showed that a majority of Americans report seeing online threats of violence based on race, gender, or sexual orientation, and that they experience harm from witnessing harassment against their communities, even when the posts do not target them individually.
Background on anti-LGBTQ climate in Poland:
- In a 2023 report from ILGA-Europe, Poland was ranked 42nd out of 49 European countries for LGBTQ equality.
- The report notes a continuing trend of rising hate speech in Poland, much of it targeting trans people, and identifies online hate speech as a serious issue.
In the 2023 GLAAD Social Media Safety Index, All Major Social Media Platforms Fail on LGBTQ Safety
The third annual GLAAD Social Media Safety Index (SMSI) & Platform Scorecard was released in June 2023. The Scorecard evaluated the platforms on 12 LGBTQ-specific indicators, and all five received low or failing scores:
- Instagram: 63%
- Facebook: 61%
- TikTok: 57%
- YouTube: 54%
- Twitter: 33%
Key findings of the 2023 SMSI included:
- Anti-LGBTQ rhetoric on social media translates to real-world offline harms.
- Social media platforms are largely failing to mitigate this dangerous hate and disinformation and are inadequately enforcing their own policies.
- There is a lack of true transparency reporting from the platforms.
The 2024 GLAAD Social Media Safety Index is forthcoming in Summer 2024.
About the GLAAD Social Media Safety program:
GLAAD’s Social Media Safety program actively researches, monitors, and reports on a variety of issues facing LGBTQ social media users — with a focus on safety, privacy and expression — advocating for solutions in numerous realms. The annual Social Media Safety Index (SMSI) provides recommendations for the industry at large and reports on LGBTQ user safety across the five major social media platforms: Facebook, Instagram, Twitter/X, YouTube, and TikTok. Learn more by reading the annual GLAAD Social Media Safety Index & Platform Scorecard here.
About GLAAD:
GLAAD rewrites the script for LGBTQ acceptance. As a dynamic media force, GLAAD tackles tough issues to shape the narrative and provoke dialogue that leads to cultural change. GLAAD protects all that has been accomplished and creates a world where everyone can live the life they love. For more information, please visit www.glaad.org or connect @GLAAD on social media.