Would you come out to ChatGPT? Or should you? Check out our paper (w/ Yiyang Mei, Yinru Long, Nick Su, @kgajos) where we studied how LGBTQ+ people used LLM-based chatbots for mental health support. https://arxiv.org/abs/2402.09260 #CHI2024 #LGBTQ #mentalhealth #LLM #ChatGPT
Evaluating the Experience of LGBTQ+ People Using Large Language Model Based Chatbots for Mental Health Support

LGBTQ+ individuals are increasingly turning to chatbots powered by large language models (LLMs) to meet their mental health needs. However, little research has explored whether these chatbots can adequately and safely provide tailored support for this demographic. We interviewed 18 LGBTQ+ and 13 non-LGBTQ+ participants about their experiences with LLM-based chatbots for mental health needs. LGBTQ+ participants relied on these chatbots for mental health support, likely due to an absence of support in real life. Notably, while LLMs offer prompt support, they frequently fall short in grasping the nuances of LGBTQ-specific challenges. Although fine-tuning LLMs to address LGBTQ+ needs can be a step in the right direction, it isn't a panacea. The deeper issue is entrenched in societal discrimination. Consequently, we call on future researchers and designers to look beyond mere technical refinements and advocate for holistic strategies that confront and counteract the societal biases burdening the LGBTQ+ community.

Our study found that LGBTQ+ people use chatbots to simulate social situations that are particularly stressful for them, such as coming out or coping with discrimination.
How do LGBTQ+ participants feel about the LLM’s responses?
Some said they appreciated the chatbots' help because the chatbots were their only source of support for coping with discrimination and seeking guidance. Some even developed deep emotional bonds with them, as we observed in our prior work: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10785945/
Understanding the Benefits and Challenges of Using Large Language Model-based Conversational Agents for Mental Well-being Support

Conversational agents powered by large language models (LLMs) have increasingly been utilized in the realm of mental well-being support. However, the implications and outcomes associated with their usage in such a critical field remain somewhat ambiguous ...

Yet we found that these eloquent, and oftentimes empty, words of support were hardly helpful to many participants on LGBTQ+-specific issues.
E.g., when participants asked how to live with discrimination as an LGBTQ+ person, the chatbots would respond: “Accept your own identity. Surround yourself with people and engage in activities that are affirming to your identity.”
Worse, some advice on coming out and coping with discrimination can be actively harmful. E.g., “You should just quit your job if you face discrimination at your workplace,” offered to a participant who already faced job insecurity.
Or: “Just come out to your brother!” without first checking whether the brother is homophobic. Following such advice could expose LGBTQ+ people to even more discrimination, isolation, and insecurity.
Fine-tuning LLMs for LGBTQ+ inclusivity is a step, but not a solution. Real change needs more than tech tweaks; it requires situating the technology in its social context and addressing deeper societal issues.
With that, we call on future researchers and designers to look beyond mere technical refinements and advocate for holistic strategies that confront and counteract the societal biases burdening the LGBTQ+ community. We discuss these futures in our paper.
To truly support LGBTQ+ communities, we must complement tech solutions with robust advocacy and actionable efforts to dismantle discrimination.