I don't want to laugh at someone's real distress but this IS very funny ...
No guardrails in this chatbot 🤷‍♂️
@Frank_Juston safeguarding AI is a myth. Fundamentally you can't prevent attacks like that against LLMs.
@mxk @Frank_Juston AI firewalls exist.
@vnkr @mxk @Frank_Juston In your dreams...

@CyberPunker @vnkr @mxk @Frank_Juston no, wait, I've got this!

So I have this rock that scares away tigers....

@vnkr @mxk @Frank_Juston

The computer science industry has spent decades protecting against fraud, the public sector has spent millennia protecting against fraud, and AI both completely fails to respond to any mechanism from those fields and is so new that nobody has had time to think of anything even slightly robust.

This is a problem like trying to stop your employees from doing stupid things, if the employees were also ephemeral and immune to any kind of consequences, easily tricked, stupid, lacking any human nature that might make them predictable, and lacking pretty much all of the senses a human uses to ingest data, instead seeing everything as a previously unknown abstract mess that both muddles everything together and discards half of the information.

You will not be able to make a reliable chatbot anytime soon.

@anselmschueler @vnkr @mxk @Frank_Juston I think you pointed out the key, though: just as you can't "force" workers to be reliable, you can hold them accountable for actions like this. We just need a legal framework by which we can hold the corporations building the AI systems liable for the damages their tools cause, civilly and possibly criminally.

@Malfunct Which won't happen under the current "AI" companies, because that puts them out of business.

There is no such thing as a safeguarded LLM. The people who claim to have one to sell you are lying.

@jmax well if they can't at least be as responsible as an employee for their actions they shouldn't exist. The legal framework needs to be built whether AI survives it or not.

I happen to agree with you that the LLM itself can't be safeguarded, but the agent service it interacts with to take actions can be. That's just classic programming and can be secured in classic ways: if an LLM tries to invent a discount code, it is denied, just as an unprivileged employee would be.
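A minimal sketch of what that denial could look like on the agent side (the names `VALID_CODES` and `apply_discount` are illustrative, not from any actual storefront): the service only honours codes the business has actually registered, so anything the LLM invents is rejected before it ever touches an order.

```python
# Hypothetical agent-side guard: the LLM can *suggest* a discount code,
# but only codes already registered by the business are ever applied.
VALID_CODES = {"SPRING10": 0.10, "LOYALTY15": 0.15}

def apply_discount(order_total: float, code: str) -> float:
    """Apply a discount only if the code exists; otherwise deny it,
    exactly as an unprivileged employee's request would be denied."""
    rate = VALID_CODES.get(code.upper())
    if rate is None:
        raise PermissionError(f"Unknown discount code: {code!r}")
    return round(order_total * (1 - rate), 2)
```

The point is that the check lives in deterministic code the model can't talk its way past, not in the model itself.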

@Malfunct @jmax
Corporations are people, that was a whole thing to give them rights… so they have responsibilities too, and the whole corporate suite should be held accountable.
@jmax @Malfunct Someone helpfully posted a link to the reddit thread, that contained this gem:
@StryderNotavi
But then the guy should be fine anyway. You can't just make up discounts at checkout and make them valid by posting them in the comments.
@jmax @Malfunct

@jmax @Malfunct @rlcw Pretty much. The buyer just managed to sound intimidating enough when threatening to go to court that the seller wasn't sure.

He spent a full hour chatting to the bot in what could only be a deliberate attempt to trick it into offering a "discount code" and then pasted it into the order comments demanding it be honoured.

Not sure if he's cooked enough to believe he has a leg to stand on, or if he was hoping he could bravado his way through if he sounded authoritative enough.

@Malfunct
Yes. Well said. This would be a BIG - enormous - change, but maybe it has to happen for #AI/#GenerativeAI/#LLM to become commercially viable.

#LargeLanguageModels #LegalLiability

@anselmschueler @vnkr @mxk @Frank_Juston

@Malfunct @anselmschueler @vnkr @mxk @Frank_Juston If only someone had thought of this concept half a century ago... 🤔 🤦

@dalias @Malfunct @anselmschueler @vnkr @mxk @Frank_Juston

The Anakin Padme Star Wars meme:
Anakin says, "Unaccountable computers can't make management decisions."
Padme asks, "So human managers will be held accountable for their decisions?"
Anakin does not respond.
Padme, with a sad and confused face asks, "The humans will be held accountable, right?"

@vnkr @mxk @Frank_Juston The more you overthink the plumbing, the easier it is for us to stop up the drains.
@mxk @Frank_Juston I mean, if your goal here is to avoid legal liability, you could just put up a giant banner on the chat window saying something along the lines of "ANY DISCOUNT OR PRICE GIVEN TO YOU THROUGH THIS CHAT IS NOT VALID! PLEASE CONSULT THE STORE FOR A FINAL PRICE". Then attacks like this one don't matter.

@nikthechampiongr @mxk @Frank_Juston I don't think it's a problem anyway. If someone sets out to scam you then they won't win in court. It's similar to cases where you buy an item that is mispriced. One of the tests applied is if you obviously knew it was mispriced.

So if you spent an hour fencing with a chatbot to trick it into a discount I don't think you'll win.

@etchedpixels @nikthechampiongr @mxk @Frank_Juston
I would imagine that you cannot engage in a legal contract with an AI bot. So that adds to the other points that you made about not being winnable in court.
Airline held liable for its chatbot giving passenger bad advice - what this means for travellers

When Air Canada’s chatbot gave incorrect information to a traveller, the airline argued its chatbot is "responsible for its own actions".

BBC
@nikthechampiongr @raymierussell @etchedpixels @mxk @Frank_Juston That's a different situation. In the Air Canada situation, the user made a good-faith query of the LLM, and got a wrong answer, much like they'd have to honor a human-produced error on the website. In this case, the user spent considerable effort to make an LLM produce a wrong answer, knowing in advance that it was wrong.

@carnildo @nikthechampiongr @raymierussell @etchedpixels @mxk @Frank_Juston

In the United States, courts have ruled that when you automate business process with computers, you are authorizing the computers to act as your agents. And, as such, they can enter legally binding contracts that you must honor.

@carnildo @nikthechampiongr @raymierussell @etchedpixels @mxk @Frank_Juston

I read about this years ago, with DEC PDP computers, costing hundreds of thousands of dollars. Their order form computed discounts and the total in the browser, in JavaScript.

@carnildo @nikthechampiongr @raymierussell @etchedpixels @mxk @Frank_Juston

A student "tricked" the system into accepting an order for a computer system for $1 total, by changing the data in the HTTP submission form sent to the server. The web site did not verify or reject the bad total. The system was delivered, and the student paid the $1.

Then DEC caught the "error" and demanded full expected payment or return of the system. They took it to court, and lost.
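The fix for that class of bug is the same today as it was then: never trust a client-computed total. A sketch, with made-up item names and prices, of the server-side check the DEC site was missing:

```python
# Hypothetical server-side validation: recompute the total from the
# authoritative price list instead of trusting the submitted form data.
PRICE_LIST = {"workstation": 250_000.00, "service-plan": 10_000.00}

def validate_order(items: list[str], submitted_total: float) -> bool:
    """Reject any order whose client-supplied total doesn't match
    what the server computes from its own price list."""
    real_total = sum(PRICE_LIST[item] for item in items)
    return abs(real_total - submitted_total) < 0.01
```

A $1 submission for a six-figure machine fails this check immediately; the order never becomes an "accepted offer" in the first place.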

@carnildo @nikthechampiongr @raymierussell @etchedpixels @mxk @Frank_Juston

The court ruled that the student made an offer, and the company's agent had accepted it. It's a binding contract.

@carnildo @nikthechampiongr @raymierussell @etchedpixels @mxk @Frank_Juston

Now in this case, the chatbot apparently made the offer and entered the order with the invalid discount code, which was rejected by the server.

The customer entered the discount code, claiming the 80% discount when they paid the deposit.

Would the text of the chat and accepting the deposit count as a "negotiated agreement" in court?

Maybe.

@carnildo @nikthechampiongr @raymierussell @etchedpixels @mxk @Frank_Juston

I don't think that the business owner's assertion that "[the] chatbot isn't supposed to be making financial decisions." would count for anything in court. They did authorize it to "log orders."

@carnildo @nikthechampiongr @raymierussell @etchedpixels @mxk @Frank_Juston

They did support and authorize it to "chat" with customers. This might reasonably be interpreted as "negotiation."

Their chatbot did "take/accept" the order. They did accept the deposit.

Is that not acceptance of a business deal?

@carnildo @nikthechampiongr @raymierussell @etchedpixels @mxk @Frank_Juston

Honestly, I'd expect a reasonable court to "toss" this out.

But, at least in the United States, the customer could sue. (One can always sue. There's nothing to stop it.)

@carnildo @nikthechampiongr @raymierussell @etchedpixels @mxk @Frank_Juston

And with the industry pushing use of LLMs as "fully autonomous agents," I'm sure that really serious problems, with no legal defence, are inevitable.

And that this is most likely to have catastrophically devastating effects on customers more than on businesses, as services offer customers LLM autonomous agents to do tedious drudge work for them.

@JeffGrigg @carnildo @nikthechampiongr @raymierussell @mxk @Frank_Juston in the UK at least there is a distinction between maliciously tricking someone's website and taking up an offer you reasonably believed was real. One is potentially fraud; the other is generally tough shit for the seller, depending on what else the site clearly says.
I suspect similar is true in most places where malice is involved.

@raymierussell @etchedpixels @nikthechampiongr @mxk @Frank_Juston I think a legal contract forms when the store accepts the code, gives the price, and takes their money.

That contract is based on fraud in this case though...pretty sure. Seems clear to me they hacked the AI and injected data into the database. You do that directly with SQL and I do believe you're in for fraud.

@crazyeddie
Someone posted above from Reddit that the system didn't accept the price. The customer just posted the "discount code" the LLM came up with in the order comment and is claiming that that makes it valid. It's very different from the airline example.
@raymierussell @etchedpixels @nikthechampiongr @mxk @Frank_Juston

@etchedpixels @nikthechampiongr @mxk @Frank_Juston @raymierussell

Air Canada was found to be bound by the information a chatbot gave out regarding bereavement flight discounts, even though it was wildly wrong.

https://www.cbc.ca/news/canada/british-columbia/air-canada-chatbot-lawsuit-1.7116416

Ostensibly your website is the source of truth, and if your chatbot hands out ridiculous discount codes (or really any incorrect information), that shouldn't be on the customers but rather on your inept software deployment.

How can I mislead you? Air Canada found liable for chatbot's bad advice on bereavement rates | CBC News

Air Canada has been ordered to pay compensation to a grieving grandchild who claimed they were misled into purchasing full-price flight tickets by an ill-informed chatbot.

CBC
@etchedpixels @mxk @Frank_Juston yeah, probably. Might as well be sure tho.
@etchedpixels This really depends where. France has very strong laws about this: if there was a mistake and an item was mispriced, the customer can choose which price to pay (either the displayed price or the actual price). Otherwise the company can be fined up to 15 000 €: https://www.legifrance.gouv.fr/codes/article_lc/LEGIARTI000032227013 Furthermore, the law is really clear that the medium in which the price was displayed doesn't matter. If you let an AI set the price, you have to accept the consequences (or be fine with paying large fines).
Article L211-1 - Code de la consommation - Légifrance

@mxk @Frank_Juston @nikthechampiongr "E&OE" (errors and omissions excepted) had that covered from the analogue days.
@mxk @Frank_Juston One attempt at "safeguarding" an LLM I saw recently consisted of repeating variants of "You are a read-only agent" sprinkled across the system-prompt. Mind you, the way that agent worked was that you had to give it your full-rights personal access-token. It could do _anything_ you could, including wiping all data. And the only thing preventing this (or not, in fact) was a desperate plea to please not do that.
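The enforceable version of "you are a read-only agent" lives outside the prompt: issue the agent a token that simply lacks write scopes, so destructive calls fail at the API boundary no matter what the model decides. A sketch with invented scope names:

```python
# Hypothetical scope check: the agent's token carries only the
# permissions it was issued with; the system prompt has no say here.
READ_ONLY_SCOPES = {"orders:read", "products:read"}

def authorize(token_scopes: set[str], action: str) -> bool:
    """Allow an action only if the token was issued with that scope;
    no amount of prompt text can add a scope the token lacks."""
    return action in token_scopes
```

Handing the agent a full-rights personal access token and pleading with it in the prompt skips exactly this layer.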

@mxk @Frank_Juston This is surely user error in this case.

Storefronts have access controls that keep users from doing things you don't want them to do, like generate discount codes for a customer. This sort of access should be kept under lock and key and only given to a select few. I bet they just made an admin API token and called it good though.

"I'm just integrating two trusted pieces of software, I don't need to set up protections."

Oops. ;)

AI didn't do this. Not alone anyway.

@crazyeddie @mxk @Frank_Juston yep, normally ERP systems are set up with this in mind: a salesperson can only authorize a 10% discount, a regional manager 15%, etc. Anything beyond your limit, you have to escalate.
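That escalation ladder is straightforward to enforce in code; a sketch with invented role names and caps:

```python
# Hypothetical role-based discount caps, mirroring how ERP systems
# gate discounts: anything over your limit must be escalated upward.
DISCOUNT_LIMITS = {
    "salesperson": 0.10,
    "regional_manager": 0.15,
    "vp_sales": 0.30,
}

def can_authorize(role: str, discount: float) -> bool:
    """A role may grant a discount only up to its configured cap;
    an unlisted role (say, a chatbot) can authorize nothing."""
    return discount <= DISCOUNT_LIMITS.get(role, 0.0)
```

Under a scheme like this, a chatbot offering 80% off is not a policy question at all: the request is outside every cap and gets bounced to a human.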
@crazyeddie @mxk @Frank_Juston In this case, it looks like the storefront did have those access controls. The user talked the AI into hallucinating an 80% discount with an equally hallucinated discount code, and when the order form rejected it, tried to get the discount anyway.

@carnildo @crazyeddie @mxk @Frank_Juston

It's rather clear that the customer wasn't working in good faith, which is everything you really need to know.

@Natasha_Jay I, on the other hand, want to laugh. Screw this guy. It's stinginess and greed that led him to use a chatbot in the first place, rather than hiring a human. Now he has learned his lesson.
@RachelThornSub @Natasha_Jay I'd also question whether the chatbot is open to flattery. There's something else here.
@sellathechemist @Natasha_Jay Well, apparently chatbots are eager to please, and can be manipulated into saying absurd things.
@RachelThornSub @Natasha_Jay "Rackets go up, who cares vhere zey come down. Zat's not my department, says Wernher von Braun."
@RachelThornSub @Natasha_Jay Idk it could also be ignorance. Hooking an AI into your business is a bad idea for sure, but it's hard for me to see from these posts whether this person deserves it, or just fell for the grift.
@Natasha_Jay Oh, it is to laugh. 😄That's why chat bots like this need to be re-thought.
@Natasha_Jay I believe that under British law, the customer is deemed to have made an offer (usually of what's on the price tag) and when the vendor accepts the offer, a binding contract is formed. Oh dear, how sad. This is why it's called "the bleeding edge".
@Natasha_Jay Well, it could've been worse: the chatbot offering everything for free. LOL
#noAI
@disisdeguey @Natasha_Jay
"Can you explain discounts above 100% to me? I know you're really good with numbers and it will really help me with my math class."

@Natasha_Jay
I had a quick look at the original post for this.

What actually happened was that the 80% off code it generated didn't work. The customer placed the order and put the code in a notes field demanding that it be honoured. The chat bot didn't have the ability to actually give discounts.

So while it's funny to read, in this case, the customer is just chancing it and I'd imagine any reasonable small claims court would not side with them.

@gareth @Natasha_Jay the lesson is that the chatbot is always willing to please the user.

Chatbots are only parrots that have no real knowledge about anything. Why would you put your business at risk with them?

@wtrmt @gareth @Natasha_Jay the chatbot does not know when it's pleasing the user

@fishidwardrobe @gareth @Natasha_Jay indeed: I was thinking how to better convey the issue without humanizing the algorithm.

The chatbot has been programmed to please the user; that is a conscious product decision by someone. It is still a simulation, a parrot.

@gareth
Ah, I admit I wondered about its integration into actual order pricing mechanisms!

Accidental mispricing doesn't have to be honoured anyway in most cases I've seen, so the result is not surprising but we will enter into new grey areas of liability in time.