@CyberPunker @vnkr @mxk @Frank_Juston no, wait, I've got this!
So I have this rock that scares away tigers....
The computer science industry has spent decades protecting against fraud, and the public sector has spent millennia doing the same. AI both completely defeats every mechanism from these areas and is also so new that nobody has had time to think of anything even slightly robust.
This is like trying to stop your employees from doing stupid things, except the employees are also ephemeral and can't face any kind of consequences, are easily tricked, are stupid, lack any human nature that might make them predictable, and lack pretty much all of the senses a human uses to take in data, instead seeing everything through a previously unknown abstract mess that both muddles everything together and discards half of the information.
You will not be able to make a reliable chatbot anytime soon.
@Malfunct Which won't happen under the current "AI" companies, because that puts them out of business.
There is no such thing as a safeguarded LLM. The people who claim to have one to sell you are lying.
@jmax well if they can't be at least as responsible for their actions as an employee, they shouldn't exist. The legal framework needs to be built whether AI survives it or not.
I happen to agree with you that the LLM itself can't be safeguarded, but the agent service it interacts with to take actions can; that's classic programming and can be secured in classic ways. If an LLM tries to invent a discount code, it gets denied, just as an unprivileged employee would.
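Something like this, sketched in Python. Everything here (the function names, the code list, the policy ceiling) is invented for illustration; the point is only that the LLM proposes, and deterministic code disposes:

```python
# Hypothetical sketch: a deterministic gateway between an LLM "agent" and
# the real order system. VALID_CODES and MAX_DISCOUNT_PCT are made up for
# illustration; the LLM never touches order state directly.

VALID_CODES = {"SPRING10": 10, "LOYAL15": 15}  # issued by humans, not the LLM
MAX_DISCOUNT_PCT = 15                          # hard business-rule ceiling

def apply_discount(order_total: float, code: str) -> float:
    """Apply a discount only if the code exists and is within policy.

    The LLM can *ask* for any code it likes; this is the
    unprivileged-employee check that denies invented ones.
    """
    pct = VALID_CODES.get(code.strip().upper())
    if pct is None or pct > MAX_DISCOUNT_PCT:
        raise PermissionError(f"discount code {code!r} not authorized")
    return round(order_total * (100 - pct) / 100, 2)
```

So `apply_discount(100.0, "SPRING10")` succeeds, but a chatbot-invented `"SUPERDEAL80"` raises `PermissionError` no matter how persuasively the bot was talked into offering it.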
@jmax @Malfunct @rlcw Pretty much. The buyer just managed to sound intimidating enough when threatening to go to court that the seller wasn't sure.
He spent a full hour chatting to the bot in what could only be a deliberate attempt to trick it into offering a "discount code" and then pasted it into the order comments demanding it be honoured.
Not sure if he's cooked enough to believe he has a leg to stand on, or if he was hoping he could bravado his way through if he sounded authoritative enough.
@Malfunct
Yes. Well said. This would be a BIG - enormous - change, but maybe it has to happen for #AI/#GenerativeAI/#LLM to become commercially viable.
@dalias @Malfunct @anselmschueler @vnkr @mxk @Frank_Juston
The Anakin Padme Star Wars meme:
Anakin says, "Unaccountable computers can't make management decisions."
Padme asks, "So human managers will be held accountable for their decisions?"
Anakin does not respond.
Padme, with a sad and confused face asks, "The humans will be held accountable, right?"
@nikthechampiongr @mxk @Frank_Juston I don't think it's a problem anyway. If someone sets out to scam you then they won't win in court. It's similar to cases where you buy an item that is mispriced. One of the tests applied is whether you obviously knew it was mispriced.
So if you spent an hour fencing with a chatbot to trick it into a discount I don't think you'll win.
@carnildo @nikthechampiongr @raymierussell @etchedpixels @mxk @Frank_Juston
In the United States, courts have ruled that when you automate business processes with computers, you are authorizing the computers to act as your agents. And, as such, they can enter legally binding contracts that you must honour.
…
@carnildo @nikthechampiongr @raymierussell @etchedpixels @mxk @Frank_Juston
I read about this years ago, in a case involving DEC PDP computers costing hundreds of thousands of dollars. The order form computed discounts and the total in the browser, in JavaScript.
…
@carnildo @nikthechampiongr @raymierussell @etchedpixels @mxk @Frank_Juston
A student "tricked" the system into accepting an order for a computer system for $1 total by changing the data in the HTTP form submission sent to the server. The web site did not verify or reject the bad total. The system was delivered, and the student paid the $1.
Then DEC caught the "error" and demanded the full expected payment or return of the system. They took it to court, and lost.
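The missing check is a one-liner of classic defensive programming: the server recomputes the total from its own price list and rejects any client-supplied figure that disagrees. A minimal sketch (the price list and item names are invented):

```python
# Hypothetical sketch of the check the order system lacked: never trust a
# total computed in the browser; recompute it server-side from the
# server's own price list. PRICES and the item names are made up.

PRICES = {"pdp-11": 250_000.00, "cable-kit": 150.00}

def validate_order(items: dict[str, int], client_total: float) -> float:
    """Recompute the total server-side and reject a mismatched client total."""
    server_total = sum(PRICES[name] * qty for name, qty in items.items())
    if abs(server_total - client_total) > 0.005:
        raise ValueError(
            f"client total {client_total} != server total {server_total}"
        )
    return server_total
```

With this in place, the $1 submission is rejected before an order ever exists, and there is nothing for a court to enforce.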
…
@carnildo @nikthechampiongr @raymierussell @etchedpixels @mxk @Frank_Juston
The court ruled that the student made an offer, and the company's agent had accepted it. It's a binding contract.
…
@carnildo @nikthechampiongr @raymierussell @etchedpixels @mxk @Frank_Juston
Now in this case, the chatbot apparently made the offer and entered the order with the invalid discount code, which was rejected by the server.
The customer entered the discount code, claiming the 80% discount when they paid the deposit.
Would the text of the chat and accepting the deposit count as a "negotiated agreement" in court?
Maybe.
…
@carnildo @nikthechampiongr @raymierussell @etchedpixels @mxk @Frank_Juston
I don't think that the business owner's assertion that "[the] chatbot isn't supposed to be making financial decisions" would count for anything in court. They did authorize it to "log orders."
…
@carnildo @nikthechampiongr @raymierussell @etchedpixels @mxk @Frank_Juston
They did support and authorize it to "chat" with customers. This might reasonably be interpreted as "negotiation."
Their chatbot did "take/accept" the order. They did accept the deposit.
Is that not acceptance of a business deal?
…
@carnildo @nikthechampiongr @raymierussell @etchedpixels @mxk @Frank_Juston
Honestly, I'd expect a reasonable court to "toss" this out.
But, at least in the United States, the customer could sue. (One can always sue. There's nothing to stop it.)
…
@carnildo @nikthechampiongr @raymierussell @etchedpixels @mxk @Frank_Juston
And with the industry pushing use of LLMs as "fully autonomous agents," I'm sure that really serious problems, with no legal defence, are inevitable.
And this is most likely to have catastrophically devastating effects on customers more than on businesses, as services offer customers autonomous LLM agents to do tedious drudge work for them.
@raymierussell @etchedpixels @nikthechampiongr @mxk @Frank_Juston I think a legal contract forms when the store accepts the code, gives the price, and takes their money.
That contract is based on fraud in this case though, pretty sure. Seems clear to me they hacked the AI and injected data into the database. If you did that directly with SQL, I do believe you'd be in for fraud.
@etchedpixels @nikthechampiongr @mxk @Frank_Juston @raymierussell
Air Canada was found to be bound by the information a chatbot gave out regarding bereavement flight discounts, even though it was wildly wrong.
https://www.cbc.ca/news/canada/british-columbia/air-canada-chatbot-lawsuit-1.7116416
Ostensibly your website is the source of truth, and if your chatbot gives out ridiculous but valid discount codes (or really any incorrect information), that's not on the customers but on your inept software deployment.
@mxk @Frank_Juston This is surely user error in this case.
Storefronts have access controls that keep users from doing things you don't want them to do, like generating discount codes for a customer. That sort of access should be kept under lock and key and given only to a select few. I bet they just minted an admin API token and called it good, though.
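That's the difference between an admin token and a scoped one. A minimal sketch, with invented scope names and tokens, of how a least-privilege token stops this even when the model is happy to oblige:

```python
# Hypothetical sketch of scoped API tokens. The chatbot's token carries
# only the scopes it needs ("orders:log", "chat:read"), so a privileged
# action like minting discount codes is refused no matter what the model
# asks for. Token and scope names are invented for illustration.

TOKENS = {
    "chatbot-token": {"orders:log", "chat:read"},
    "admin-token":   {"orders:log", "discounts:create", "refunds:issue"},
}

def authorize(token: str, required_scope: str) -> None:
    """Raise unless the token explicitly carries the required scope."""
    if required_scope not in TOKENS.get(token, set()):
        raise PermissionError(f"token lacks scope {required_scope!r}")

def create_discount_code(token: str, pct: int) -> str:
    authorize(token, "discounts:create")
    return f"SAVE{pct}"  # placeholder for real code generation
```

An hour of prompt-wrangling gets the customer nothing, because `create_discount_code("chatbot-token", 80)` fails at the authorization check, not at the model's goodwill.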
"I'm just integrating two trusted pieces of software, I don't need to set up protections."
Oops. ;)
AI didn't do this. Not alone anyway.
@carnildo @crazyeddie @mxk @Frank_Juston
It's rather clear that the customer wasn't working in good faith, which is everything you really need to know.
@Natasha_Jay
I had a quick look at the original post for this.
What actually happened was that the 80%-off code it generated didn't work. The customer placed the order and put the code in a notes field, demanding that it be honoured. The chatbot didn't actually have the ability to give discounts.
So while it's funny to read, in this case, the customer is just chancing it and I'd imagine any reasonable small claims court would not side with them.
@gareth @Natasha_Jay the lesson is that the chatbot is always willing to please the user.
Chatbots are only parrots that have no real knowledge about anything. Why would you put your business at risk with them?
@fishidwardrobe @gareth @Natasha_Jay indeed: I was thinking how to better convey the issue without humanizing the algorithm.
The chatbot program has been programmed to please the user, that is a conscious product decision by someone. It is still a simulation, a parrot.
@gareth
Ah, I admit I wondered about its integration into the actual order-pricing mechanisms!
Accidental mispricing doesn't have to be honoured anyway in most cases I've seen, so the result is not surprising, but we will enter new grey areas of liability in time.