There's a common claim that not all #AI is bad. But we keep seeing cases where non-generative AI is also harmful. One reason is that #LLMs have created an environment that pushes quality control onto the victim and removes accountability from the actors involved.

Here is a Dutch article on a 10% error rate in automated parking fines. The victims are mostly people from a lower socio-economic class.

- the card that allows people with special permits to park closer to their destination is not detected, so its holders are frequently fined
- humans around the car are ignored, so the context of quickly loading or unloading is often missed
- the complaint procedure takes a long time, requires real effort, and is poorly documented

https://www.nu.nl/binnenland/6391925/ruim-10-procent-van-parkeerboetes-door-scanautos-is-onterecht.html

More than 10 percent of parking fines issued by scan cars are unjustified

The use of scan cars for parking enforcement results in roughly 500,000 unjustified fines each year, according to research by the Autoriteit Persoonsgegevens (AP, the Dutch Data Protection Authority). Vulnerable groups such as people with disabilities receive such unjustified fines relatively often.

NU