There's a lot that's alarming in this article, but perhaps the most alarming part is the NYC spokesperson asserting that the problem can be fixed via upgrades:
>>
https://www.thecity.nyc/2024/03/29/ai-chat-false-information-small-business/
It seems to bear repeating: chatbots based on large language models are designed to *make shit up*. This isn't a fixable bug. It's a fundamental mismatch between tech and task.
Also, it's worth noting that RAG (retrieval augmented generation) doesn't fix the problem. See those nice links into NYC web pages? Not stopping the system from *making shit up*. (Second column is chatbot response, third is journalist's report on the actual facts.)
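To see why retrieval doesn't solve this, here is a toy sketch of the RAG pattern (all names and the "model" are hypothetical stand-ins, not any real system): retrieval narrows what the model *sees*, but the generation step is still free-form text prediction, so nothing forces the answer to stay inside the retrieved passages.

```python
def tokenize(text):
    """Lowercase word set, with trailing punctuation stripped."""
    return {w.strip(".,?!'").lower() for w in text.split()}

def retrieve(query, corpus):
    """Naive keyword retrieval: keep passages sharing any word with the query."""
    q_words = tokenize(query)
    return [p for p in corpus if q_words & tokenize(p)]

def generate(query, passages):
    """Stand-in for an LLM. Like a real model, it can emit claims that
    appear nowhere in the passages it was handed -- that's the point."""
    return "Businesses may keep a share of employee tips."  # confabulated

def is_grounded(answer, passages):
    """Crude grounding check: every word of the answer must occur
    somewhere in the retrieved text."""
    passage_words = set()
    for p in passages:
        passage_words |= tokenize(p)
    return all(w in passage_words for w in tokenize(answer))

corpus = [
    "Employers may not keep any portion of employee tips.",
    "Restaurants must post hygiene grades near the entrance.",
]
query = "Can a business take workers' tips?"
passages = retrieve(query, corpus)   # finds the relevant (correct) passage
answer = generate(query, passages)   # ...but the answer contradicts it
print(is_grounded(answer, passages)) # False: the claim isn't in the sources
```

The retrieval step works fine here, and the citation-worthy source even states the opposite of the generated answer, which is exactly the failure mode the article's table shows: nice links to NYC pages next to made-up claims.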
>>
@emilymbender As I'm sure you know already, it's worth reminding everyone that LLMs are ALWAYS making stuff up - ALWAYS.
It's just that most people often can't tell, and somehow politicians and captains of industry really, really, really want to believe that they can replace humans with AI.
They can't - and until it hits them in the wallet they won't care.