TIL that saying "holy shit don't use ChatGPT for medical advice" is a "purity test". I didn't know that before. In fact, I still don't.
@davidgerard It's really quite a thing that we have reached the “have faith, unbeliever” stage of AI already. Although these are mostly also the guys who made “HODL” a thing.
Charlie the Unicorn (YouTube)

@henryk @ianbetteridge @davidgerard

A serious problem with AI in its current form is its appearance of credibility. Mostly right is FAR more dangerous than obviously wrong.

If "we" don't watch out, AI will become (or may already have become) a powerful tool of gaslighting and disinformation.

Semi-off-topic: I cannot explain why, but I loved the Charlie the Unicorn vids years ago.

@Lsamuelson57 @henryk @davidgerard On the other hand, “mostly right” is about the best humans get. The real problem is a lack of critical thinking on the part of the humans, who simply believe everything a machine says.

@ianbetteridge @henryk @davidgerard

Agreed, we must accept responsibility for our decisions, AI or not.

My complaint is about people with concentrated power and private agendas who produce falsehoods for their own gain. They work carefully (too often successfully) to prevent readers from making informed choices.

For example, health-oriented information. The underlying phenomena are subtle enough that it takes a medical genius to wade through input that sounds credible but is not.

@Lsamuelson57 @ianbetteridge @henryk @davidgerard Yeah I think these LLMs are far worse than “mostly right.” More like “almost certainly wrong ‘somewhere’ but unless you’re an expert in the thing you’re asking about you won’t be able to easily determine where.” And of course when health care is involved: “and you could die.”