There's a lot that's alarming in this article, but perhaps the most alarming part is the NYC spokesperson asserting that the problem can be fixed via upgrades:

>>

https://www.thecity.nyc/2024/03/29/ai-chat-false-information-small-business/

NYC AI Chatbot Touted by Adams Tells Businesses to Break the Law

The Microsoft-powered bot says bosses can take workers' tips and that landlords can discriminate based on source of income. That's not right.

THE CITY - NYC News

It seems to bear repeating: chatbots based on large language models are designed to *make shit up*. This isn't a fixable bug. It's a fundamental mismatch between tech and task.

Also, it's worth noting that RAG (retrieval augmented generation) doesn't fix the problem. See those nice links into NYC web pages? Not stopping the system from *making shit up*. (Second column is chatbot response, third is journalist's report on the actual facts.)
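For those unfamiliar with the architecture, here is a deliberately toy sketch of why retrieval doesn't guarantee grounding (all names and the URL are hypothetical; no real system's code). Retrieval only *prepends* documents to the prompt; nothing forces the generator's output to be entailed by what was retrieved, so an answer can cite a real page while contradicting it:

```python
# Toy RAG sketch (hypothetical; not any real system's implementation).

def retrieve(query, corpus):
    """Naive keyword retrieval: return docs sharing any word with the query."""
    words = set(query.lower().split())
    return [doc for doc in corpus if words & set(doc["text"].lower().split())]

def generate(prompt):
    """Stand-in for an LLM. A real model samples fluent text conditioned on
    the prompt; the retrieved passages influence, but do not constrain, it."""
    # Here we hard-code an ungrounded answer to make the failure mode visible.
    return "Yes, employers may keep workers' tips."

corpus = [
    {"url": "https://example.com/nyc-tips-rules",  # hypothetical URL
     "text": "Employers may not take any portion of workers' tips."},
]

docs = retrieve("can employers take workers' tips", corpus)
prompt = "\n".join(d["text"] for d in docs) + "\nQ: Can employers take workers' tips?"
answer = generate(prompt)

# The pipeline happily attaches a real-looking citation to the ungrounded answer:
print(answer, "[source: " + docs[0]["url"] + "]")
```

The citation link comes from the retrieval step, the answer from the generation step, and there is no step that checks one against the other.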

>>

@emilymbender It's the end result of decades of writing code with an eye to "beating the Turing test", where "beating the Turing test" explicitly requires fooling the judges. It's no surprise that they have gotten really good at writing code to fool the judges.

When you're writing code to fool people, fooling people isn't a bug, it's a design feature.