Hmm, and presumably anyone operating a general-purpose chatbot that could conceivably be prompted to give such advice (e.g. as the conversational interface to a regular web page) is also plausibly at risk?
@cstross @davidgerard who will you imprison? The ceo? The programmers? The qa team?
One of the big draws of tech is the ability to turn human error (and malfeasance) into "computer error". And society has been trained to believe software errors aren't anyone's fault, so there's no one to hold accountable.
That needs to change. Companies need to be accountable for their "computer errors" - especially when they're baked into design and not actually errors
@cstross @wronglang @Jer @davidgerard
Exactly.
When the board votes out a CEO, they lose all unvested stock. All of the salary they've received and all of their vested stock remain theirs. This is normally (for a moderately large company) enough money to live comfortably for the rest of their lives without working.
I would happily endure this ‘punishment’.
@david_chisnall @cstross @Jer @davidgerard yes but we only do mock punishments
edit: my point is that both are options, if we're talking about modifying the law either mechanism could work iff actually applied.
@wronglang @cstross @Jer @davidgerard
You could possibly treat all compensation paid to the CEO while the company knowingly engaged in illegal activity as proceeds of crime. There are existing laws that allow such money to be confiscated.
@cstross @wronglang @Jer @davidgerard
The laws in most places allow prosecuting individual members of a company; the difficulty is proving who, in a diffuse group that all signed off on parts of something, is actually responsible. Targeting the company in addition is intended to act as a disincentive by applying financial penalties that change the cost:benefit calculation. Sadly, the costs are rarely high enough to matter.
The only de jure liability shield that incorporation gives is for shareholders. And this can go away in some cases. Both the UK and USA have a legal notion of ‘piercing the corporate veil’ that can, in extreme cases, make the owners of a company liable.
@cstross @wronglang @Jer @davidgerard
And the minimum-wage person who actually did the illegal thing, but was threatened with being fired and losing their home if they didn’t? And the paper trail that says everyone on the committee voted against it, but this rogue employee did the illegal thing unsupervised?
Whistleblower protections would need to be orders of magnitude stronger for this to be enforceable (something I would be very much in favour of).
@david_chisnall @cstross @wronglang @Jer @davidgerard
The only de jure liability shield that incorporation gives is for shareholders.
Any shareholders that had voting rights and voted for doing illegal shit should also be hit with the same legal liability.
Benefiting from the proceeds of crime, especially crime one ordered, is not a protection from liability for it.
@Jer @davidgerard That's a broader corporate liability question. Personally I'd LIKE to see the C-suite and boards of corporations that kill people sentenced to serious prison time. (Lower level staff too, but only if it's found that they made decisions that led to deaths on their own initiative. The directors *are responsible for the company's actions*.)
Going further: the current privileged legal status of corporations is an obscenity and needs to be de-legitimized.
Well, it's also just as absurd that, in states like New York, even offering ibuprofen to a friend who has a headache is a felony of "practicing medicine without a license".
And a garbage doctor with multiple malpractice suits is still a doctor.
And, if I did exhaust all other avenues, yeah, I would investigate an unknown medical condition with an LLM. I wouldn't trust US AI, though.
I have a friend whose wife had that exact thing. Test after test yielded no answer to what was wrong. They got the medical record and fed it into ChatGPT, and it provided a differential diagnosis. They took the output to a human doc, who validated the top disease as correct.
It's way too easy to oversimplify to "slop machine" or "do everything machine". It's neither, but something much more complex and weird.
In some parts of the world, it is an offence to give legal advice without being an actual lawyer. But that doesn't seem to stop some lawyers from using LLMs for generating legal documents full of slop.

@henryk @ianbetteridge @davidgerard
A serious problem with AI in its current form is its appearance of credibility. Mostly right is FAR more dangerous than obviously wrong.
If "we" don't watch out, AI will become (or may already have become) a powerful tool of gaslighting and disinformation.
Semi-offtopic.... I cannot explain why, but I loved Charlie the Unicorn vids years ago.
@ianbetteridge @henryk @davidgerard
Agreed, we must accept responsibility for our decisions, AI or not.
My complaint is about people with concentrated power and private agendas that produce falsehoods for their own gain. They work carefully (too often successfully) to prevent readers from making informed choices.
For example, health-oriented information. The underlying phenomena are subtle enough that it takes a medical genius to wade through input that sounds credible but is not.
@ianbetteridge @Lsamuelson57 @henryk @davidgerard
I think we will also see an asymmetry in the burden of proof: The human doctors will need to be right 100% of the time or the AI boosters will write them off, but the AI just has to not be 100% terrible.
Kind of like how a lot of people in the USA insist Democrats have to be perfect, and Republicans just need to have a pulse.
@ianbetteridge @davidgerard for some of these guys it's literally the same thing - they need the AI bubble to stay inflated until they can cash out
But I'm worried that many of them are less cynical and more sincere. As a long-time watcher of cults, the AGI guys already had the signs, but the culty attitudes towards AI are showing up outside that bubble now
Everyone should have access to medical professionals who take their problems seriously.
If they have that and still ask ChatGPT for medical advice... sigh
There is a bill in New York to make any companies that deploy chat bots that act like licensed professionals liable in the same way as those professionals:
@davidgerard this one hit close to my heart because I’ve had two family members die in large part because their caretaker ignored medical advice and used awful alternative medicine information from the internet to try and treat them.
an LLM can’t do critique. as you’ve said, truth is not a data type in an LLM. all of these models suck in every form of medical crankery available on the internet, mix it with words from authentic medical sources, and present it all as credible.
@davidgerard I know that alternative medicine has a body count; I’ve seen it in the flesh. I know what some of the horseshit on the Internet can do if you’re very desperate or very trusting.
the LLM lowers the trust barrier because the crank information is no longer crank flavored, but it’s still dangerous as fuck to follow the advice.
I keep seeing LLMs be presented as better than nothing and that’s wrong. I wish the people who needed help could get it, but the LLM is worse than nothing.
@zzt @davidgerard I'm pleased to inform you the body counters at http://whatstheharm.net are still online
edit: wow though, no https ... now that's what I call web 1.0
I hear you. I am sorry about your family members who died.
Carer: What the fuck? I did what you told me and they’re dead!
AI: You’re right, that one’s on me. When I said you should give them a gram of arsenic, what I should have said was *not* to give them a gram of arsenic. I’ll do better and work harder…
So, after not giving them a gram of arsenic, it’s now time to give them a relaxing cup of tea, and then read their tea leaves – I see good things happening for them on my treatment regime today.