@LouisIngenthron Ah! But my point is that users cannot be expected to understand this difference. Most of them barely understand the phones and laptops they're using. That's the bottom line.
Now, if in order to see an AIO, you need to click through a big banner that says, "THIS ANSWER MAY BE WRONG. CONSIDER IT ENTERTAINMENT ONLY. CLICKING THIS MEANS YOU UNDERSTAND THIS!" -- well, that *might* make a difference. Presentation matters.
@lauren
> But my point is that users cannot be expected to understand this difference.
That's some nanny-stateism there. If they're too stupid to understand it, then they shouldn't use it, like cars or kitchen knives or matches. It's not the state's job to ruin things because some people are too stupid to use them properly.
@lauren I've already conceded, long ago, that companies that allow such systems to speak for them should be liable for the results.
But that's very different from a chatbot with a disclaimer.
Nobody is stupid for believing a corporate bot that lies to them about a sale. But they are absolutely stupid if they try to get facts from ChatGPT, ignoring all the disclaimers, and then later rely on those "facts" in a critical situation.
@lauren The person who asked is responsible. They used the system, after being warned about its inaccuracy multiple times during the onboarding process and *underneath every prompt* (see image), and then chose to use this potentially faulty information in a life-or-death situation.
I'm a pilot. If I choose to get my weather information from ChatGPT and end up crashing as a result, that's my own damn fault.
@lauren They're there as much for the lawyers as for the users. Just like "don't eat poison" labels.
And, yeah, I think there's nuance there. When a company decides to use a chatbot as customer service, to speak on their behalf, then it absolutely should be liable for the results.
But that's a far cry from a generalized chatbot with a "don't believe my bullshit" disclaimer that can be easily manipulated by the user.
@lauren Yeah, I think the search companies presenting AI responses as fact at the top of search results, especially when the user has neither opted in nor acknowledged the danger, could be one of the cases where a company has chosen to use the bot's speech as its own and therefore becomes liable for it.
But, again, I draw the distinction between that conduct and a chatbot with a disclaimer.
@lauren @LouisIngenthron I agree with the problem here: end users don't understand LLMs, and as such take things at face value that they shouldn't. But should their ignorance limit me from using them to do very powerful things?
Tools come with risks, powerful tools even more so. But in the end I'm mostly curious about the court cases, to see how deep this rabbit hole goes.
@LPerry2 @lauren @LouisIngenthron
Not sure what the difference is between "All" and "Web."
Anyway, I switched to DuckDuckGo and am much happier.
@lauren @[email protected] There is an interesting congressional report on the interplay between generative AI and Section 230 of the CDA. It looks at some of these issues. Good to have in your reference library.
@lauren I think the next logical step is to apply this question to self-driving vehicles.
... though there's likely already precedent there: who is responsible when a malfunctioning autopilot crashes a plane?
@LouisIngenthron @lauren I think you have uncovered the flaw.
The problem with generative "AI" is that it is falsely advertised as resembling human intelligence. It does not. It mimics human speech patterns, thus giving the false impression that its "reasoning" is "intelligent". Its reasoning is low-quality computer-nerd crapola.
@LouisIngenthron @lauren That generative "AI" cannot POSSIBLY resemble human intelligence is obvious. The closest thing to human intelligence is chimpanzee intelligence, and it is very similar. Yet it obviously cannot be modeled by a language model, because chimpanzee brains have no language.
Language is an ADJUNCT capability (whose effect is not merely additive but exponential, though that is another topic).