TIL that saying "holy shit don't use ChatGPT for medical advice" is a "purity test". i didn't know that before. in fact I still don't.
@davidgerard I am pretty sure that OpenAI do not have a licence to practise medicine and are not a (human) member of the BMA, so by giving medical advice they (the humans responsible for the software) are potentially committing an imprisonable offence ...

@cstross @davidgerard Who will you imprison? The CEO? The programmers? The QA team?

One of the big draws of tech is the ability to turn human error (and malfeasance) into "computer error". Society has been trained to believe software errors aren't anyone's fault, so there's no one to hold accountable.

That needs to change. Companies need to be accountable for their "computer errors" - especially when they're baked into the design and aren't actually errors at all.

@Jer @cstross @davidgerard The fun fact is that liability then becomes shared among everyone who has touched it or enabled it to be in that position.

I see no downside to applying liability exactly like that, with responsibility apportioned according to decision-making power.