I don't want to laugh at someone's real distress but this IS very funny ...
No guardrails in this chatbot 🤷‍♂️
@Frank_Juston safeguarding AI is a myth. Fundamentally, you can't prevent attacks like that against LLMs.

@mxk @Frank_Juston This is surely user error in this case.

Storefronts have access controls that keep users from doing things you don't want them to do, like generating discount codes for a customer. This sort of access should be kept under lock and key and only given to a select few. I bet they just made an admin API token and called it good, though.
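To sketch what that looks like: instead of handing the chatbot one all-powerful admin token, you scope permissions per role so the customer-facing assistant simply can't mint discount codes. This is a minimal illustration; the `Role` and `can_issue_discount` names are hypothetical, not any real storefront's API.

```python
from enum import Enum, auto

class Role(Enum):
    CHATBOT = auto()  # customer-facing assistant: read-only access
    SALES = auto()
    ADMIN = auto()

# Permissions granted per role; the chatbot deliberately gets no
# discount-issuing capability, no matter what a user talks it into.
PERMISSIONS = {
    Role.CHATBOT: {"read_catalog"},
    Role.SALES: {"read_catalog", "issue_discount"},
    Role.ADMIN: {"read_catalog", "issue_discount", "manage_users"},
}

def can_issue_discount(role: Role) -> bool:
    return "issue_discount" in PERMISSIONS[role]
```

With scoping like this, a prompt-injected chatbot can at worst misquote prices in chat; the backend still refuses to create the code.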

"I'm just integrating two trusted pieces of software, I don't need to set up protections."

Oops. ;)

AI didn't do this. Not alone anyway.

@crazyeddie @mxk @Frank_Juston Yep, normally ERP systems are set up with this in mind: a salesperson can only authorize a 10% discount, a regional manager 15%, etc. Anything beyond your limit, you have to escalate.
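That escalation tier can be expressed in a few lines. A hedged sketch, assuming illustrative role names and limits (the specific percentages beyond the 10%/15% mentioned above are made up):

```python
# Maximum discount each role may authorize on its own (illustrative values).
APPROVAL_LIMITS = {
    "salesperson": 0.10,
    "regional_manager": 0.15,
    "director": 0.25,  # hypothetical tier for the example
}

def needs_escalation(role: str, discount: float) -> bool:
    """True if the requested discount exceeds this role's limit.

    Unknown roles get a limit of zero, so everything escalates.
    """
    return discount > APPROVAL_LIMITS.get(role, 0.0)
```

A chatbot wired into a system like this would sit at the bottom tier (or below it), so an 80% discount could never clear without a human sign-off.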
@crazyeddie @mxk @Frank_Juston In this case, it looks like the storefront did have those access controls. The user talked the AI into hallucinating an 80% discount with an equally hallucinated discount code, and when the order form rejected it, the user tried to claim the discount anyway.
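That rejection is the access control working as intended: the order form checks submitted codes against the codes the business actually issued, so a hallucinated one bounces. A minimal sketch, with made-up code names and rates:

```python
# Server-side registry of codes that were actually issued (illustrative data).
ISSUED_CODES = {"WELCOME10": 0.10, "SPRING15": 0.15}

def apply_code(code: str, price: float) -> float:
    """Apply a discount code, rejecting anything not in the registry."""
    if code not in ISSUED_CODES:
        raise ValueError(f"unknown discount code: {code}")
    return price * (1 - ISSUED_CODES[code])
```

Whatever string the chatbot invents, it never makes it into `ISSUED_CODES`, so the checkout throws it out.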

@carnildo @crazyeddie @mxk @Frank_Juston

It's rather clear that the customer wasn't acting in good faith, which is really all you need to know.