There's a lot that's alarming in this article, but perhaps the most alarming part is the NYC spokesperson asserting that the problem can be fixed via upgrades:
>>
https://www.thecity.nyc/2024/03/29/ai-chat-false-information-small-business/
It seems to bear repeating: chatbots based on large language models are designed to *make shit up*. This isn't a fixable bug. It's a fundamental mismatch between tech and task.
Also, it's worth noting that RAG (retrieval augmented generation) doesn't fix the problem. See those nice links into NYC web pages? Not stopping the system from *making shit up*. (Second column is chatbot response, third is journalist's report on the actual facts.)
>>
A Canadian court just found Air Canada liable for its chatbot's lies, which makes perfect sense: the company publishes the chatbot's output on its own website, so it's responsible for that content.
There's no "AI exception" in any laws, anywhere, that I know of.
@emilymbender It's the end result of decades of writing code with an eye to "beating the Turing test", where "beating the Turing test" explicitly requires fooling the judges. It's not a surprise that they have gotten really good at writing code to fool the judges.
When you're writing code to fool people, fooling people isn't a bug, it's a design feature.
@emilymbender As I'm sure you know already, it's worth reminding everyone that LLMs are ALWAYS making stuff up - ALWAYS.
It's just that very often most people can't tell, and somehow politicians and captains of industry really, really, really want to believe that they can replace humans with AI.
They can't - and until it hits them in the wallet they won't care.
@emilymbender What is "count tips toward minimum wage requirements" if not "take a cut of your worker's tips"?
Perhaps the real fault of the bot is that it hasn't been exposed to years of corporate propaganda teaching it that the current practice is benign and totally unobjectionable?
Min wage in NYC is $15 an hour. If employees are tipped, employers can pay them $10.35 an hour and assume that the tips make up the deficit. So the employer can make $4.65 an hour in saved wages, which is taking effectively a 'cut' of the tips, yes.
However, taking a 'cut of the tips' more typically means something like 'employer gets X% of tips'. So if an employee makes $100 in tips during a dinner hour, the employer cannot legally take, say, 10% ($10) of it.
Also, if an employee does not make minimum wage from tips, the employer must make it up. So if an employee only makes, say, $3.00 in tips, not only is the employer not entitled to 10% of it, they must actually shell out an additional $1.65 to bring the employee's pay up to the full minimum wage of $15 per hour.
(source: https://dol.ny.gov/minimum-wage-0)
And anyway, allowing the tips to count towards minimum wage is only for hospitality workers. Employers can't touch tips otherwise.
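The tip-credit arithmetic above can be sketched in a few lines. This is a minimal illustration using the figures cited in this thread (full NYC minimum wage $15.00/hour, tipped hospitality cash wage $10.35/hour); the function name and structure are my own, not from any official source.

```python
FULL_MIN_WAGE = 15.00     # NYC minimum wage per hour (as cited above)
TIPPED_CASH_WAGE = 10.35  # cash wage an employer may pay tipped hospitality workers
TIP_CREDIT = FULL_MIN_WAGE - TIPPED_CASH_WAGE  # $4.65/hour the employer "saves"

def employer_topup(tips_per_hour: float) -> float:
    """Extra cash the employer must add so the worker reaches full minimum wage.

    If tips cover the tip credit, the top-up is zero. Note the employer may
    never take a percentage of the tips themselves; the credit only offsets
    the cash wage.
    """
    shortfall = FULL_MIN_WAGE - (TIPPED_CASH_WAGE + tips_per_hour)
    return round(max(shortfall, 0.0), 2)

# Worker earns only $3.00/hour in tips: employer owes an extra $1.65.
print(employer_topup(3.00))   # 1.65
# Worker earns $10.00/hour in tips: tips exceed the credit, no top-up owed.
print(employer_topup(10.00))  # 0.0
```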
@ergative @emilymbender Which part of
> years of corporate propaganda
> teaching that the current practice
> is benign and totally unobjectionable
did you struggle with, perhaps I can help?
@emilymbender Faced with complaints about this, the government agency says, “But thousands of people were helped!” Sure, but how many weren’t lied to?
I feel like this is yet another instance of our general tech+automation+capitalism problem of ignoring the heavy cost of false positives. The systems are at a scale that makes manual review expensive, and the companies have zero incentive to discuss it, because the customer isn’t likely to notice most of them.
Police departments love image recognition because companies claim they catch N% of crooks. They never talk about the percentage of innocent people caught.
Fraud finding algorithms found tons of fraud in postal offices in England. Nobody looked at how many times they caught the wrong people until it was far too late, and the company hid it when they found out rather than lose the contract. People went to prison, lost jobs, committed suicide. But hey, the system caught fraud.
Facebook lauds the number of accounts banned for hate speech but never publishes the number banned incorrectly. Like most companies selling “AI”, they don’t actually know that number; it requires expensive manual review they don’t want to do. When I was in that org I was told that every quarter they put it on the “ought to do” list, but it never made the cut. In large part, I suspect, because developers and teams are rewarded based on how they increase profit, and Facebook has no way to measure what it loses by banning the wrong person. Getting developers there to work on something that doesn’t result in promotion is hell.