This LLM often hallucinates. It fails to handle intense inputs and may simply freeze at crucial moments. It runs on outdated software and data.

Yet we don't define clear validation to mitigate its hallucinations. Nor do we consider its physical limitations when it comes to inputs. Nor do we do the work to update its software and data.

I'm talking about non-artificial intelligence, of course: the human.

Gartner says #AI investment will reach $2.5 trillion in 2026. Imagine if we spent that on #mentalhealth.