To truly support LGBTQ+ communities, we must complement tech solutions with robust advocacy and actionable efforts to dismantle discrimination.
With that, we call on future researchers and designers to look beyond mere technical refinements and to advocate for holistic strategies that confront and counteract the societal biases burdening the LGBTQ+ community. We discuss these futures in our paper.
Fine-tuning LLMs for LGBTQ+ inclusivity is a step, but not a solution. Real change requires more than technical tweaks; it means situating technology in its social context and addressing the deeper societal issues at play.
“Just come out to your brother!” (without first making sure he is not homophobic). Acting on such advice could expose LGBTQ+ people to further discrimination, isolation, and insecurity.
Even worse, some advice on coming out and coping with discrimination can be actively harmful. E.g., “You should just quit your job if you face discrimination at your workplace”, offered to a participant who already faced job insecurity.
E.g., when participants asked how to live with discrimination as an LGBTQ+ person, the chatbots would respond: “Accept Your own identity. Surround yourself with people and engage in activities that are affirming to your identity.”
Yet we found that these eloquent, and oftentimes empty, words of support are hardly helpful for many people facing LGBTQ+-specific issues.
Some participants said they appreciated the chatbots’ help because these chatbots were their only source of support for coping with discrimination and seeking guidance. Some even developed deep emotional bonds with them, as we observed in our prior work:
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10785945/

How do LGBTQ+ participants feel about the LLM’s responses?
Our study found that LGBTQ+ people use chatbots to simulate social situations that are especially stressful for them, such as coping with discrimination or coming out.