TIL that saying "holy shit don't use ChatGPT for medical advice" is a "purity test". i didn't know that before. in fact I still don't.
@davidgerard I am pretty sure that OpenAI do not have a licence to practise medicine and are not a (human) member of the BMA, so by giving medical advice they (the humans responsible for the software) are potentially committing an imprisonable offence ...

@cstross @davidgerard

Hmm, and presumably anyone operating a general-purpose chatbot that could conceivably be prompted to give such advice (e.g. as the conversational interface to a regular web page) is also plausibly at risk?

@dwm @davidgerard Yes, although it all depends on whether the GMC (and the Police) have the guts to go after a large foreign corporation with deep pockets. It probably won't happen unless there's a major death-related scandal and/or one of the aforementioned corporations decides to go after the competition, i.e. small locally run and/or open source models with broad training sets.
@cstross @davidgerard Any coin can give medical advice. I just ask the coin: should I take this medicine? Heads means yes. Then I flip the coin. I hope the people at the coin minting facility get imprisoned for that.
@waffelhard @cstross @davidgerard …as coins are often claimed to be able to replace doctors by the coin minting industry and its adherents.
@waffelhard @cstross @davidgerard this is not the clever response you think it is.
@tedmielczarek I just love it when other people claim to know what I think!
@cstross @davidgerard needs an IANAD subroutine.
@cstross @davidgerard
I think that applies to veterinary advice, but not to human. Hence Chiropractic, Homeopathy, assorted woo.
When people complain to the GMC that someone ^^^ is giving bad advice, the GMC says that they only have powers over registered medical practitioners.
But there are laws about animals.

@cstross @davidgerard who will you imprison? The CEO? The programmers? The QA team?

One of the big draws of tech is the ability to turn human error (and malfeasance) into "computer error". And society has been trained to believe software errors aren't anyone's fault so there's no one to hold accountable

That needs to change. Companies need to be accountable for their "computer errors" - especially when they're baked into design and not actually errors

@Jer @cstross @davidgerard it's the CEO's job to manage legal risk. Imprison the CEO.
@wronglang @cstross @davidgerard I actually agree. It would certainly justify the vast amounts of money they make if they had to take personal responsibility for their harmful decisions. Might make them think a little harder about their decisions
@Jer @cstross @davidgerard I'm into it and I'm also not sure it's necessary. A corporation is just a bunch of greedy people in a trench coat. If you hurt the board with financial consequences for the company, that CEO is going to get hurt in the way they care about the most. The broader problem is that we don't properly enforce consequences for companies at all, even when the law is pretty clear.
@wronglang @Jer @davidgerard No, the CEO is only hurt *very indirectly* and usually they'll have moved on to another job (with better pay/options) before the pigeons come home to roost. Consider it took more than two decades for the OxyContin scandal to lead to court verdicts, and the Purdue owners still escaped most liability for thousands of deaths by declaring bankruptcy. How many CEOs did Purdue have during that period?

@cstross @wronglang @Jer @davidgerard

Exactly.

When the board votes out a CEO, they lose all unvested stock. All the salary they’ve received, and all stock that has already vested, remains theirs. This is normally (for a moderately large company) enough money to live comfortably for the rest of their lives without working.

I would happily endure this ‘punishment’.

@david_chisnall @cstross @Jer @davidgerard yes but we only do mock punishments

edit: my point is that both are options; if we're talking about modifying the law, either mechanism could work iff actually applied.

@wronglang @cstross @Jer @davidgerard

You possibly could treat all compensation paid to the CEO while the company knowingly engaged in illegal activity as the proceeds of immoral earnings. There are existing laws that allow such money to be confiscated.

@david_chisnall @cstross @Jer @davidgerard I really wouldn't mind making those laws stronger... and the fact we prosecute shoplifting food but fail to enforce these laws is a bigger problem
@wronglang @david_chisnall @Jer @davidgerard I think a more urgent need is to globally abolish corporate personhood and apply criminal liability law for corporate harms to the individuals who caused the harm. Cut back companies to being a money shelter again, but not a responsibility shelter.

@cstross @wronglang @Jer @davidgerard

The laws in most places allow prosecuting individual members of a company, the difficulty is proving who in a diffuse group that all signed off on part of something is actually responsible. Targeting the company in addition is intended to act as a disincentive by applying financial penalties that make the cost:benefit calculations different. Sadly, the costs are rarely high enough to matter.

The only de jure liability shield that incorporation gives is for shareholders. And this can go away in some cases. Both the UK and USA have a legal notion of ‘piercing the corporate veil’ that can, in extreme cases, make the owners of a company liable.

@david_chisnall @wronglang @Jer @davidgerard That right there is where we need to lean hard into applying the "joint enterprise" doctrine in prosecution. *Everybody* who signed off on it is responsible. If it's a committee? Fine, the committee goes to prison unless they can individually point to a paper trail documenting their objections.

@cstross @wronglang @Jer @davidgerard

And the minimum-wage person who actually did the illegal thing, but was threatened with being fired and losing their home if they didn’t? And the paper trail that says everyone on the committee voted against it, but this rogue employee did the illegal thing unsupervised?

Whistleblower protections would need to be orders of magnitude stronger for this to be enforceable (something I would be very much in favour of).

@david_chisnall @wronglang @Jer @davidgerard Yep, we need stronger whistleblower protections. An assumption that "blame the messenger" is the default company response to whistle-blowing should be baked-in and determine the outcome of wrongful dismissal cases for *any cause whatsoever* for several years after the incident.

@david_chisnall @cstross @wronglang @Jer @davidgerard

"The only de jure liability shield that incorporation gives is for shareholders."

Any shareholders that had voting rights and voted for doing illegal shit should also be hit with the same legal liability.

Benefiting from the proceeds of crime, especially crime one ordered, is not a protection from liability for it.

@cstross @Jer @davidgerard no, *actually* hurt the company enough to hurt the board, make it clear that the CEO's judgement makes them a bad hire. We do this too little so of course CEOs just float around on golden parachutes.

@Jer @davidgerard That's a broader corporate liability question. Personally I'd LIKE to see the C-suite and boards of corporations that kill people sentenced to serious prison time. (Lower level staff too, but only if it's found that they made decisions that led to deaths on their own initiative. The directors *are responsible for the company's actions*.)

Going further: the current privileged legal status of corporations is an obscenity and needs to be de-legitimized.

@cstross @Jer @davidgerard
We already have exactly this for some regulations like PCI-DSS. It's funny how we can get that sort of thing when it protects an industry like the credit card industry.
@Jer @cstross @davidgerard The fun fact is that liability then becomes a mix of everyone who has touched it or enabled it to be in that position.

I see no downsides to applying the liability just like that, with proportional responsibility based on decision power.
@Jer @cstross @davidgerard Imprison the company itself as a legal entity. Freeze all of its assets and cease all activity for the duration of the sentence.
@mikeash @Jer @davidgerard Depending on scale and type of company, congratulations: you just laid off tens to thousands of uninvolved people *and* fucked over their suppliers and customers because a handful of dipshits broke the law. (This is why corporations can get away with these stunts in the first place.)
@cstross @Jer @davidgerard Such a system would strongly discourage companies from growing beyond a certain size. Not sure what the economic effects would be, but it’s interesting to consider.
@Jer @cstross @davidgerard "society has been trained to believe software errors aren't anyone's fault so there's no one to hold accountable" See also: Horizon / Fujitsu

@cstross @davidgerard

Well, it's also just as absurd that, in states like New York, even offering ibuprofen to a friend who has a headache is a felony of "practicing medicine without a license".

And a garbage doctor with multiple malpractice suits is still a doctor.

And, if I did exhaust all other avenues, yeah, I would investigate an unknown medical condition with an LLM. I wouldn't trust US AI though.

I have a friend whose wife had that exact thing. Tests after tests had no answer to what was wrong. They got the medical record and fed it into ChatGPT and it provided a differential diagnosis. They took the output and brought it to a human doc, and validated the top disease as correct.

It's way too easy to oversimplify to "slop machine" or "do everything machine". It's neither, but something much more complex and weird.

@cstross @davidgerard

In some parts of the world, it is an offence to give legal advice without being an actual lawyer. But that doesn't seem to stop some lawyers from using LLMs for generating legal documents full of slop.

@cstross @davidgerard I feel like somebody should have learned from the tons of made up citations that lawyers are experiencing. AI doesn't have a law degree, either.
@davidgerard It's really quite a thing that we have reached the “have faith, unbeliever” stage of AI already. Although these are mostly also the guys who made “HODL” a thing.
[YouTube link: Charlie the Unicorn]

@henryk @ianbetteridge @davidgerard

A serious problem with AI in its current form is its appearance of credibility. Mostly right is FAR more dangerous than obviously wrong.

If "we" don't watch out, AI will (or may have already become) a powerful tool of gaslighting and disinformation.

Semi-offtopic.... I cannot explain why, but I loved Charlie the Unicorn vids years ago.

@Lsamuelson57 @henryk @davidgerard On the other hand, “mostly right” is about the best humans get. The real problem is a lack of critical thinking on the part of the humans, who simply believe everything a machine says.

@ianbetteridge @henryk @davidgerard

Agreed, we must accept responsibility for our decisions, AI or not.

My complaint is about people with concentrated power and private agendas that produce falsehoods for their own gain. They work carefully (too often successfully) to prevent readers from making informed choices.

For example, health-oriented information. The underlying phenomena are subtle enough that it takes a medical genius to wade through input that sounds credible but is not.

@Lsamuelson57 @ianbetteridge @henryk @davidgerard Yeah I think these LLMs are far worse than “mostly right.” More like “almost certainly wrong ‘somewhere’ but unless you’re an expert in the thing you’re asking about you won’t be able to easily determine where.” And of course when health care is involved: “and you could die.”

@ianbetteridge @Lsamuelson57 @henryk @davidgerard

I think we will also see an asymmetry in the burden of proof: The human doctors will need to be right 100% of the time or the AI boosters will write them off, but the AI just has to not be 100% terrible.

Kind of like how a lot of people in the USA insist Democrats have to be perfect, and Republicans just need to have a pulse.

@ianbetteridge @davidgerard for some of these guys its literally the same thing - they need the ai bubble to stay inflated until they can cash out

But I'm worried that many of them are less cynical and more sincere. As a long time watcher of cults, the agi guys already had the signs but the culty attitudes towards ai are showing up outside that bubble now

@ianbetteridge @davidgerard
People who sell beanie babies don't want people to lose faith in beanie babies

@davidgerard

Everyone should have access to medical professionals who take their problems seriously.

If they have that and still ask ChatGPT for medical advice... sigh

@davidgerard @Mab_813 A real problem (in so many ways) is that too many people who have access to medical professionals don’t have their problems taken seriously. In which cases, turning to ChatGPT is understandable, as was turning to “alternative medicine” before that. Usually regrettable, but understandable.

@davidgerard

There is a bill in New York to make any companies that deploy chat bots that act like licensed professionals liable in the same way as those professionals:

https://www.nysenate.gov/legislation/bills/2025/S7263

NY State Senate Bill 2025-S7263: "Imposes liability for damages caused by a chatbot impersonating certain licensed professionals."

@davidgerard this one hit close to home because I’ve had two family members die in large part because their caretaker ignored medical advice and used awful alternative medicine information from the internet to try to treat them.

an LLM can’t do critique. as you’ve said, truth is not a data type in an LLM. all of these models suck in every form of medical crankery available on the internet, mix it with words from authentic medical sources, and present it all as credible.

@davidgerard I know that alternative medicine has a body count; I’ve seen it in the flesh. I know what some of the horseshit on the Internet can do if you’re very desperate or very trusting.

the LLM lowers the trust barrier because the crank information is no longer crank flavored, but it’s still dangerous as fuck to follow the advice.

I keep seeing LLMs be presented as better than nothing and that’s wrong. I wish the people who needed help could get it, but the LLM is worse than nothing.

@davidgerard LLMs get alternative medicine patients to the “I don’t care what you say, *I* feel better” point of no return so much quicker because they don’t know it’s alternative medicine. some of it might even be legitimate medicine that works! and all this does is make them less skeptical until they get output that’s plausible but fatal, or until the damage from what they’ve been doing builds up and they can’t survive anymore. and thanks to the LLM, they’ll fight off anyone who tries to help.
@zzt @davidgerard Lies are never more effective than when they're sprinkled with truth, and that's exactly the bread and butter of LLMs: truth-flavoured bullshit.

@zzt @davidgerard I'm pleased to inform you the body counters at http://whatstheharm.net are still online

edit: wow though, no https ... now that's what I call web 1.0

What's The Harm? — "This is a list of topics in which we have found stories where a lack of critical thinking has caused unnecessary harm, death, injury, hospitalizations, major financial loss or other damages."

@zzt

I hear you. I am sorry about your family members who died.

@zzt @davidgerard

Carer: What the fuck? I did what you told me and they’re dead!

AI: You’re right, that one’s on me. When I said you should give them a gram of arsenic, what I should have said was *not* to give them a gram of arsenic. I’ll do better and work harder…

So, after not giving them a gram of arsenic, it’s now time to give them a relaxing cup of tea, and then read their tea leaves – I see good things happening for them on my treatment regime today.