@lauren The person who asked is responsible. They used the system, after being warned about its inaccuracy multiple times during the onboarding process and *underneath every prompt* (see image), and then chose to use this potentially faulty information in a life-or-death situation.
I'm a pilot. If I choose to get my weather information from ChatGPT and end up crashing as a result, that's my own damn fault.
@lauren They're there as much for the lawyers as for the users. Just like "don't eat poison" labels.
And, yeah, I think there's nuance there. When a company decides to use a chatbot as customer service, to speak on their behalf, then it absolutely should be liable for the results.
But that's a far cry from a generalized chatbot with a "don't believe my bullshit" disclaimer that can be easily manipulated by the user.