2024 Social Media Safety Index

Focus on AI: Risks for LGBTQ People

“AI will always fail LGBTQ people.”
Mary L. Gray, Senior Principal Researcher, Microsoft Research

[Image: A non-binary person looking at a laptop. Photo: Gender Spectrum Collection]

Over the last decade, artificial intelligence (AI) technology has brought many benefits, but as it has accelerated and become more widespread, it has also elicited concerns from human rights advocates about an array of dangers and risks, particularly for marginalized communities. Experts such as Joy Buolamwini, Timnit Gebru, Latanya Sweeney, and others have sounded the alarm regarding large-scale issues of algorithmic bias and AI-facilitated disinformation campaigns, including deepfakes. In April 2024, more than 200 civil society organizations including Free Press, Color of Change, and GLAAD sent an open letter to the CEOs of 12 major tech companies, urging them to adopt more aggressive policies to mitigate dangerous, AI-fueled political propaganda leading up to the U.S. presidential election. The letter achieved some initial impact, with eight of the 12 companies issuing responses; committed to maintaining ongoing pressure, organizers were clear that much more action is needed as “platforms evade their responsibility to users around the world.” Various civil society efforts continue to hold companies to account.

While many fears center on AI’s future capabilities, generative AI (which encompasses technologies like image generators and large language models) already poses unique risks today for marginalized groups, including LGBTQ people. For one, several studies have documented fundamental bias baked into AI systems for natural language processing, facial and image recognition, and content recommendation, bias that can have real-world consequences. Access Now explains: “Companies and governments are already using AI systems to make decisions that lead to discrimination. When police or government officials rely on them to determine who they should watch, interrogate, or arrest — or even ‘predict’ who will violate the law in the future — there are serious and sometimes fatal consequences.”

Generative AI systems learn from existing data, which may contain harmful stereotypes about LGBTQ people, misrepresenting the diversity of experiences within the community. (An April 2024 investigation by Wired, for example, found that many AI tools, like OpenAI’s Sora, tended to portray LGBTQ people as white, young, and purple-haired.) In recent years, some companies have developed “automated gender recognition” (AGR) technology, which claims to predict a person’s gender (often in order to sell them products via targeted advertising). Spotify’s 2021 patent, for example, claims to be able to detect, among other things, “emotional state, gender, age, or accent” to recommend music. For trans, nonbinary, and gender non-conforming people, and anyone else whose gender falls outside the binary, this technology is particularly problematic. Access Now’s Daniel Leufer writes: “Research shows that AGR technology based on facial recognition is almost guaranteed to misgender trans people and inherently discriminates against non-binary people. As Os Keyes explains in their paper, The Misgendering Machines: Trans/HCI Implications of Automatic Gender Recognition, approaches to AGR are typically based on a male-female gender binary and derive gender from physical traits; this means that trans people are often misgendered, while non-binary people are forced into a binary that undermines their gender identities.”
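
To make Keyes’ structural point concrete, here is a minimal sketch in Python (hypothetical and purely illustrative: the toy “model,” its inputs, and its scores are invented, not drawn from any real AGR product) of a classifier whose output space contains exactly two labels:

```python
# Illustrative sketch only: a toy stand-in for an AGR classifier.
# The feature values and decision rule are invented; no real system is shown.

CLASSES = ["female", "male"]  # the ONLY outputs this model can ever produce

def agr_predict(face_features):
    """Map facial measurements to one of exactly two labels."""
    score = sum(face_features) / len(face_features)  # toy scoring function
    label = CLASSES[1] if score > 0.5 else CLASSES[0]
    confidence = abs(score - 0.5) * 2
    return label, confidence

# Whatever the input, the answer is always "female" or "male": non-binary
# identities are unrepresentable by construction, and gender is "derived"
# from physical traits alone, which is how trans people get misgendered.
print(agr_predict([0.7, 0.4, 0.9]))  # -> ('male', 0.33...)
print(agr_predict([0.2, 0.3, 0.1]))  # -> ('female', 0.6)
```

The failure is architectural rather than a matter of accuracy: no amount of additional training data lets a two-label classifier output a gender it has no category for.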

In addition, social media platforms are automating content moderation processes, and many large platforms are using AI to decide whether content violates their hate speech and harassment policies. In recent years, many social media companies have reduced vital trust and safety teams, often opting to contract with third-party vendors that fail to adequately recognize and understand harmful content targeting marginalized groups. In 2021, for example, MIT Technology Review reported on a study in which “scientists tested four of the best AI systems for detecting hate speech and found that all of them struggled in different ways to distinguish toxic and innocuous sentences.”
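
One way such systems fail is easy to demonstrate. The sketch below is hypothetical (the word list and example sentences are invented, and production moderation models are far more complex), but it illustrates a documented failure mode: over-flagging identity terms that co-occur with slurs in training data, while missing harassment that avoids flagged vocabulary.

```python
# Hypothetical sketch: a naive keyword-based toxicity scorer. Real moderation
# models are neural classifiers, but they can exhibit this same failure mode.

# Identity terms often appear near slurs in training data, so poorly built
# systems can learn the identity terms themselves as "toxic" signals.
LEARNED_TOXIC_SIGNALS = {"gay", "trans", "queer"}

def toxicity_score(text):
    words = [w.strip(".,!?").lower() for w in text.split()]
    hits = sum(w in LEARNED_TOXIC_SIGNALS for w in words)
    return hits / max(len(words), 1)

# An innocuous, identity-affirming sentence gets flagged...
print(toxicity_score("I am a proud trans woman."))           # ~0.17, flagged
# ...while coded harassment that avoids the word list sails through.
print(toxicity_score("People like that should not exist."))  # 0.0, ignored
```

Both errors harm LGBTQ users: the first suppresses legitimate speech from the community itself, while the second leaves genuine hate on the platform.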

Some of the world’s largest social media platforms have shown they are not equipped to handle the rise of AI-facilitated hate, harassment, and disinformation campaigns, including deepfakes and bots that can spew hate-based imagery at massive scale. In a July 2023 paper, Mozilla researchers wrote: “This research shows scale degrades datasets further, amplifying bias and causing real-world harm.” The New York Times likewise reports: “In the hands of anonymous internet users, A.I. tools can create waves of harassing and racist material. It’s already happening on the anonymous message board 4chan.” In one example, from May 2023, a deepfake video of President Biden in drag (with anti-trans overtones) went viral on Instagram and TikTok (many instances of the post are still live on both platforms). In another, from February 2023, a fabricated video showed Biden making transphobic remarks in a speech.

Thankfully, some tech companies are starting to recognize that they must make at least some effort to address the trust and safety risks of generative AI. In January 2024, Meta started requiring disclosures for AI-created or altered political ads, and in February 2024, the company said it would begin labeling AI-generated images, audio, and video. Similarly, in March 2024, YouTube began requiring creators to disclose when they make realistic videos with AI. It remains unclear whether these measures will be effective at scale.
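
Labeling schemes like these generally depend on content self-declaring its provenance through embedded metadata. The sketch below is a hypothetical illustration: the dictionary stands in for parsed image metadata, and while the DigitalSourceType values follow the IPTC vocabulary that industry provenance efforts reference, the parsing layer and platform logic here are invented.

```python
# Hypothetical sketch of a platform-side labeling check. The dict stands in
# for parsed image metadata; the policy logic is invented for illustration.

# IPTC DigitalSourceType values referenced by provenance-labeling efforts
# (assumption: a platform keys its label off values like these).
AI_SOURCE_TYPES = {
    "trainedAlgorithmicMedia",               # fully AI-generated
    "compositeWithTrainedAlgorithmicMedia",  # composite including AI output
}

def needs_ai_label(metadata):
    """True if the asset self-declares as AI-generated or AI-edited."""
    return metadata.get("DigitalSourceType") in AI_SOURCE_TYPES

print(needs_ai_label({"DigitalSourceType": "trainedAlgorithmicMedia"}))  # True
print(needs_ai_label({}))  # False: stripped or absent metadata, no label
```

The second call illustrates the scale problem noted above: disclosure-based labeling only catches content that announces itself, so uploads with stripped or absent metadata pass through unlabeled.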

These tools have real impacts on people, and addressing the risks requires collaboration between technologists, policymakers, LGBTQ advocates, and other stakeholders to pursue safeguards that help prevent malicious use and unintended consequences. Ensuring compliance with existing anti-discrimination laws and developing regulatory safeguards to address emerging risks will be essential for protecting the rights and well-being of LGBTQ people and other marginalized groups. As Tech Policy Press notes, in the U.S. there is no federal AI legislation close to becoming law, but in the past year there has been a surge of AI laws proposed and passed, some of which have already taken effect. In October 2023, the Biden Administration issued an executive order on “safe, secure, and trustworthy” AI, which provides an ambitious blueprint.
