There's a lot that's alarming in this article, but perhaps the most alarming part is the NYC spokesperson asserting that the problem can be fixed via upgrades:

>>

https://www.thecity.nyc/2024/03/29/ai-chat-false-information-small-business/

NYC AI Chatbot Touted by Adams Tells Businesses to Break the Law

The Microsoft-powered bot says bosses can take workers' tips and that landlords can discriminate based on source of income. That's not right.

THE CITY - NYC News

It seems to bear repeating: chatbots based on large language models are designed to *make shit up*. This isn't a fixable bug. It's a fundamental mismatch between tech and task.

Also, it's worth noting that RAG (retrieval augmented generation) doesn't fix the problem. See those nice links into NYC web pages? Not stopping the system from *making shit up*. (Second column is chatbot response, third is journalist's report on the actual facts.)
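To make the point concrete, here is a minimal sketch of the RAG pattern. All names and the toy retriever are hypothetical illustrations, not the NYC bot's actual stack: retrieval only decides what goes *into* the prompt; nothing in the pattern constrains what comes *out* of the model.

```python
# Minimal RAG sketch (hypothetical names; the NYC bot's internals are not
# public). Retrieval narrows what goes into the prompt, but the generation
# step is still ordinary next-token sampling over that prompt.

def retrieve(query, documents, k=1):
    """Toy keyword-overlap retriever standing in for a real vector search."""
    def score(doc):
        return len(set(query.lower().split()) & set(doc.lower().split()))
    return sorted(documents, key=score, reverse=True)[:k]

def build_prompt(query, passages):
    """Stuff retrieved passages into the prompt -- this is all RAG adds."""
    context = "\n".join(passages)
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

docs = [
    "Employers may not take any portion of a worker's tips.",
    "Landlords may not discriminate based on lawful source of income.",
]
query = "Can my boss take my tips?"
prompt = build_prompt(query, retrieve(query, docs))
# The prompt now contains the correct rule, but the LLM call that would
# follow (omitted here) is free-form text generation: it can contradict the
# context, and no step in the pattern checks the answer against the sources.
print(prompt)
```

The citations such systems display come from the retrieval step, so they can look authoritative even when the generated answer departs from the cited page.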

>>

@emilymbender @fps_gbg IMHO the underlying issue in any case like this is whether the people in charge can see the problems and are willing to adjust. If they want this and don't care about the repercussions, any logical argument is lost on them; the only move is to replace them. There are so many people in authority who ignore scientific consensus for ideological reasons that it has become really dangerous for us.
#politics #politicsandtechnology