TIL that saying "holy shit don't use ChatGPT for medical advice" is a "purity test". i didn't know that before. in fact I still don't.
@davidgerard I am pretty sure that OpenAI do not have a license to practice medicine and are not a (human) member of the BMA so by giving medical advice they (the humans responsible for the software) are potentially committing an imprisonable offense ...

@cstross @davidgerard Who will you imprison? The CEO? The programmers? The QA team?

One of the big draws of tech is the ability to turn human error (and malfeasance) into "computer error". And society has been trained to believe software errors aren't anyone's fault so there's no one to hold accountable

That needs to change. Companies need to be accountable for their "computer errors" - especially when they're baked into design and not actually errors

@Jer @cstross @davidgerard Imprison the company itself as a legal entity. Freeze all of its assets and cease all activity for the duration of the sentence.
@mikeash @Jer @davidgerard Depending on the scale and type of company, congratulations: you just laid off tens to thousands of uninvolved people *and* fucked over their suppliers and customers because a handful of dipshits broke the law. (This is why corporations can get away with these stunts in the first place.)
@cstross @Jer @davidgerard Such a system would strongly discourage companies from growing beyond a certain size. Not sure what the economic effects would be, but it’s interesting to consider.