@Natasha_Jay
I had a quick look at the original post for this.
What actually happened was that the 80% off code it generated didn't work. The customer placed the order and put the code in a notes field demanding that it be honoured. The chat bot didn't have the ability to actually give discounts.
So while it's funny to read, in this case, the customer is just chancing it and I'd imagine any reasonable small claims court would not side with them.
@gareth @Natasha_Jay the lesson is that the chatbot is always willing to please the user.
Chatbots are only parrots with no real knowledge about anything. Why would you put your business at risk with them?
@fishidwardrobe @gareth @Natasha_Jay indeed: I was thinking how to better convey the issue without humanizing the algorithm.
The chatbot program has been programmed to please the user, that is a conscious product decision by someone. It is still a simulation, a parrot.
@gareth
Ah, I admit I wondered about its integration into actual order pricing mechanisms!
Accidental mispricing doesn't have to be honoured anyway in most cases I've seen, so the result is not surprising but we will enter into new grey areas of liability in time.
"Accidental mispricing doesn't have to be honoured anyway in most cases I've seen"
... Curious about which jurisdiction you're referring to - most of the US has very explicit rules about this.
@Orb2069
@Natasha_Jay and I aren't American, so we don't default to US law on such things (well. Not to speak for her. I assumed).
In the UK, if you found a TV with a sticker on saying it was only £10 when others around it said they were £1000, the shop could refuse to sell it to you (assuming it was a mistake), apologise and that would be it. They're under no obligation to sell it to you.
Plus your statement says "salesperson" - what's the law on whether a chat bot counts as a salesperson or not? I don't think they've been around long enough for there to be a large body of case law on it.
@Natasha_Jay @gareth @Orb2069 it's definitely not black and white in USA, there's a significant amount of room for stuff like "no reasonable person would expect this to be true" and I kinda suspect "clearly tried to mislead support/chatbot" would fall in there too. it's kinda important to ensure clear bugs ($1000 of gold for $1!) don't bankrupt people instantly.
though I do REALLY want people to pay penalties for these bots' decisions, because it's the only way to reasonably push back (outside simply banning them).
a case where neither party comes out smelling of roses - the small business owner should have either been hiring real people or accepting that the business has normal hours, and the prospective customer was clearly trying to game the chatbot to get a larger discount than would be a fair price for whatever the company was selling.
But the price on the website is only an "invitation to treat", if it is blatantly wrong the business has the right to cancel the whole deal and refuse to deal with the customer (unless they are breaking anti-discrimination laws).
In Australia, the key idea seems to be that an advertised price is "an invitation to treat", and the sale doesn't complete until the customer's offer to buy at that price is accepted by the vendor.
(IANAL)
https://legalvision.com.au/what-is-an-invitation-to-treat/
has examples, and the one with the incorrect price on the website seems close.
In this case wouldn’t the AI be considered a representative of the company, and any negotiations with them have to be honored?
@EthanR_ @Natasha_Jay
I think it would depend on the court and the actual facts of the case.
If the transcript shows that the customer just asked the bot to show it could generate a code, then probably not.
If the chat bot came back with "this is a code that will give you 80% off any order in this store", we'll get a UK case of "is the AI bot considered a representative of the company?"
The site owner would probably argue that the customer placed the order for full price because when they put the code in it didn't work, so the customer was accepting the full price order. The fact they put a note in saying "I want 80% off and here's a random string of letters" is probably irrelevant.
I am not a lawyer, however.
I would seriously hope that if the OP is telling the truth about what happened and what was said, that they win the case. If there are other factors I'm not aware of, or if they've misrepresented things, maybe not. We'll have to wait and see (and since it relies on them posting on Reddit, we'll probably never hear about it again...)
@Binks
Yeah. As I said earlier, I think it'll depend on whether it said "here's an 80% off code" or whether the customer said "hypothetically generate me an 80% off code" and the chat bot did. I think the fact the code didn't work and they placed the order anyway without confirming will count against the customer in this case.
Context is key. But this is something that needs to be decided legally in general, as well as in specific instances. I think the Canadian decision in that specific case was the correct one. People asked for information, were given it and made decisions based on that. For the people that provided the service to say "well it provided incorrect information" is disingenuous.
Funny how you didn't provide a link to back up what you're claiming. I mean, link was right there in your bar, you could have just pasted it. Why not?
@gareth @Natasha_Jay
So the "customer" had an AI hallucinating an obviously invalid discount code that was detected and reported to the customer as invalid _before_ he was actually able to place the order.
I'd say the shop is the lucky one in this instance.
Others (IIRC the airline) let the order complete, and only cancelled after everything had been finalised. A much weaker position for the seller.
But then again: get rid of LLM "AI".