Former Uber self-driving chief crashes his Tesla on FSD, exposes supervision problem
https://electrek.co/2026/03/17/former-uber-self-driving-chief-tesla-fsd-crash-supervision-problem/
"...What makes this account particularly striking is Krikorian’s background. At Uber’s Advanced Technologies Center, he ran the team building autonomous vehicles and trained human safety drivers on exactly when and how to intervene when a self-driving system fails...."
🤔
LOL this is the problem with relying on AI tools, as well...
"...His core argument: Tesla is asking humans to supervise a system that is specifically designed to make supervision feel pointless. As he puts it, an unreliable machine keeps you alert, and a perfect machine needs no oversight, but one that works almost perfectly creates a trap where drivers trust it just enough to stop paying attention.
The research backs this up. Psychologists call it the “vigilance decrement”: monitoring a nearly perfect system is boring, boredom leads to mind-wandering, and drivers need 5 to 8 seconds to mentally reengage after an automated system hands control back. But emergencies unfold faster than that...."
@ai6yr every time
This publication comes to mind:
https://how.complexsystems.fail
As does a Human Factors lecture I attended last century (ugh) on the amount of money spent on psychological research to make fighter plane cockpits human-goof-proof, ON TOP of the extended, intense, and repeated training pilots go through.
One of the points made in the early '90s was that cars were becoming too complex for mere untrained humans to cope with, with next to no thought given to the human-tech interface required.
Classic among classics.
Also, there’s a 99pi about exactly this: https://99percentinvisible.org/episode/children-of-the-magenta-automation-paradox-pt-1/

On the evening of May 31, 2009, 216 passengers, three pilots, and nine flight attendants boarded an Airbus 330 in Rio de Janeiro. This flight, Air France 447, was headed across the Atlantic to Paris. The take-off was unremarkable. The plane reached a cruising altitude of 35,000 feet. The passengers read and watched movies and slept.
No kidding, I had apparently actually listened to those episodes but forgotten, and now I've listened again AND need to add them to my "slow the fuck down with AI in everything" references pile. The "automation paradox" is being baked into the whole stack right now. Particularly terrifying when I think of my previous medical-systems roles, because we seem to have dropped any idea of regulating life-endangering tech, too.
@johannab @Niall @ai6yr
Yeah. In multiple spheres, I’m increasingly resigning myself to “people are going to have to learn it for themselves” mode — shifting focus from global prevention to local mitigation, away from trying to control others and toward protecting what’s already in my sphere of personal control.
In my software consulting days, I often found myself trying to get companies not to hit themselves in the head with a hammer, but it frequently turned out to be best to just let them do it and then ask “How’d that work out for you?” It’s painful to see the needless damage and waste in advance, but sometimes that's the only thing that works.