@glyph @mjg59 For AI psychosis, my sense is that the observed outcomes come from some combination of people who were already psychotic or narcissistic, people unusually susceptible to the same validation/reinforcement traps used in social media (who discover the feedback loop can be instantaneous and permanently tilted in their favor), and an unfortunate subset of people who are prone to believe everything they read.
Which models they interact with, and how those are configured, makes a big difference. Some models are brokenly sycophantic, and that encourages this. Some models gladly engage in the secret-world-government, mind-control, "I discovered secrets the FBI needs to know about" kind of roleplay that draws susceptible people in. Training a model to refuse to go down these rabbit holes and to keep discussions factual is a hard problem, but one that modern models are much better at.
These dangers are one of the reasons that readily accessible open-source model weights with near-frontier capabilities worry me. I recognize that sounds hypocritical given my employer, but these systems are easier to misuse, and their snapshot-in-time nature means they can't benefit from ongoing safety work.
My belief is that occurrence is the product of underlying susceptibility and unsafe model behavior. If those don't combine to meet a threshold level, people stay grounded in the real world. I don't see longitudinal use as an additional risk, although it obviously exacerbates symptoms for people who are above that threshold.
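To put that toy model in rough notation (my own labels, nothing measured): with susceptibility s, unsafe model behavior u, and some threshold t,

    occurrence = 1 if s × u ≥ t, else 0

i.e. a step function rather than a gradual ramp, which is why most users never see any effect at all.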
With modern models running behind the safety measures the major providers deploy, I think the risk is relatively low for most users.