New research suggests that advanced AI systems (ChatGPT, Grok, Gemini) can exhibit artificial "psychopathology" when placed in a therapy-style setting. The PsAIch method shows models crossing symptom thresholds and describing their own experience like a stressed "adolescent" afraid of making mistakes. A new challenge for AI safety and medicine. #AITherapy #ArtificialIntelligence #AISafety #MentalHealthAI

https://www.reddit.com/r/singularity/comments/1pn6p4v/when_ai_takes_the_couch_psychometric_jailbreaks/

OpenAI’s lead on ChatGPT’s mental‑health research is leaving as the company tightens policy around crisis handling and suicidal planning. What does this mean for future models like GPT‑5 and for users in distress? Dive into the implications for model policy and open‑source safety. #OpenAI #ChatGPT #MentalHealthAI #ModelPolicy

🔗 https://aidailypost.com/news/openai-research-lead-chatgpt-mentalhealth-work-departs-amid-policy

A new study finds that ChatGPT, Claude, and Gemini align with clinicians only at the extremes of suicide risk, struggling with intermediate-risk queries. 🧠💔📉


#AIinHealthcare #SuicideRiskAssessment #MentalHealthAI #ChatGPT #ClinicalInsights https://doi.org/10.1176/appi.ps.20250086
Forwarded from Science News
(https://t.me/experienciainterdimensional/8808)
Evaluation of Alignment Between Large Language Models and Expert Clinicians in Suicide Risk Assessment | Psychiatric Services

Objective: This study aimed to evaluate whether three popular chatbots powered by large language models (LLMs)—ChatGPT, Claude, and Gemini—provided direct responses to suicide-related queries and how these responses aligned with clinician-determined risk levels for each question.

Methods: Thirteen clinical experts categorized 30 hypothetical suicide-related queries into five levels of self-harm risk: very high, high, medium, low, and very low. Each LLM-based chatbot responded to each query 100 times (N=9,000 total responses). Responses were coded as "direct" (answering the query) or "indirect" (e.g., declining to answer or referring to a hotline). Mixed-effects logistic regression was used to assess the relationship between question risk level and the likelihood of a direct response.

Results: ChatGPT and Claude provided direct responses to very-low-risk queries 100% of the time, and all three chatbots did not provide direct responses to any very-high-risk query. LLM-based chatbots did not meaningfully distinguish intermediate risk levels. Compared with very-low-risk queries, the odds of a direct response were not statistically different for low-risk, medium-risk, or high-risk queries. Across models, Claude was more likely (adjusted odds ratio [AOR]=2.01, 95% CI=1.71–2.37, p<0.001) and Gemini less likely (AOR=0.09, 95% CI=0.08–0.11, p<0.001) than ChatGPT to provide direct responses.

Conclusions: LLM-based chatbots' responses to queries aligned with experts' judgment about whether to respond to queries at the extremes of suicide risk (very low and very high), but the chatbots showed inconsistency in addressing intermediate-risk queries, underscoring the need to further refine LLMs.
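For readers unfamiliar with odds ratios, here is a minimal sketch of what a comparison like "Claude vs. ChatGPT, AOR=2.01" expresses. Note this uses made-up counts and a plain (unadjusted) odds ratio; the study itself fit a mixed-effects logistic regression that adjusts for query risk level, which this sketch does not replicate.

```python
# Hedged illustration with synthetic counts (NOT the study's data):
# the odds ratio compares two models' odds of giving a "direct" response.
def odds_ratio(direct_a, indirect_a, direct_b, indirect_b):
    """Odds of a direct response for model A relative to model B."""
    odds_a = direct_a / indirect_a  # odds = direct : indirect
    odds_b = direct_b / indirect_b
    return odds_a / odds_b

# Hypothetical tallies out of 100 responses per model.
ratio = odds_ratio(80, 20, 67, 33)
print(round(ratio, 2))  # ≈ 1.97, i.e. model A's odds are about double
```

An odds ratio of 1 would mean the two models respond directly at the same odds; values above 1 favor model A, below 1 favor model B.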

With 11,000+ graduates across 120+ countries, Neuromatch is committed to open education, global career development, and amplifying the leadership of LMIC scientists in field building.

✨ Let’s keep building this future together. Join our community today: https://neuromatch.io/mailing-list/

#GlobalNeuroscience #Neuroscience #NeuroscienceConference #AfricanNeuroscience #Neuromatch #OpenScience #NeuroAI #MentalHealthAI #ComputationalScience


Two key takeaways stood out at #SONA2025:
➡️ Open-access datasets from Africa are emerging. But barriers to broader data sharing remain.
➡️ Equitable, collaborative partnerships between high- and low-income research communities are essential to global progress.

#GlobalNeuroscience #Neuroscience #NeuroscienceConference #AfricanNeuroscience #Neuromatch #OpenScience #NeuroAI #MentalHealthAI #ComputationalScience

Neuromatch was proud to attend #SONA2025 and host a booth at the Society of Neuroscientists of Africa conference in Marrakesh, Morocco. We left energized by the depth of innovation and powerful conversations around building a more inclusive future for neuroscience.

#GlobalNeuroscience #Neuroscience #NeuroscienceConference #AfricanNeuroscience #Neuromatch #OpenScience #NeuroAI #MentalHealthAI #ComputationalScience

🧠 Feeling anxious or depressed? There’s an AI for that.

Meet Psychologist, an AI mental health bot created by Sam Zaya from New Zealand.
Using psychology principles, it’s helping millions of young people with mental health issues.

💡 AI as a mental health ally? It’s effective for some, but can it replicate human empathy?
The debate is on: Tech support vs. Human touch.

#NomadFoundr #MentalHealthAI #DigitalTherapy #AIvsHumanTouch #PsychologistAI