Former Uber self-driving chief crashes his Tesla on FSD, exposes supervision problem

https://electrek.co/2026/03/17/former-uber-self-driving-chief-tesla-fsd-crash-supervision-problem/

#tesla #crash

Raffi Krikorian, Mozilla’s CTO and the former head of Uber’s self-driving car division, totaled his Tesla Model X while using...

VERY glad the guy and his kids are okay, but it would have been something else if the Uber self-driving chief had been incinerated or killed by a self-driving car. 🤔

"...What makes this account particularly striking is Krikorian’s background. At Uber’s Advanced Technologies Center, he ran the team building autonomous vehicles and trained human safety drivers on exactly when and how to intervene when a self-driving system fails...."

🤔

LOL this is the problem with relying on AI tools, as well...

"...His core argument: Tesla is asking humans to supervise a system that is specifically designed to make supervision feel pointless. As he puts it, an unreliable machine keeps you alert, and a perfect machine needs no oversight, but one that works almost perfectly creates a trap where drivers trust it just enough to stop paying attention.

The research backs this up. Psychologists call it the “vigilance decrement”: monitoring a nearly perfect system is boring, boredom leads to mind-wandering, and drivers need 5 to 8 seconds to mentally reengage after an automated system hands control back. But emergencies unfold faster than that...."

#AI

@ai6yr every time

This publication comes to mind:

How Complex Systems Fail: https://how.complexsystems.fail

As does a Human Factors lecture I attended last century (ugh) on the amount of money spent on psychological research to make fighter plane cockpits human-goof-proof, ON TOP of the extended, intense, and repeated training pilots go through.

One of the points made in the early '90s was that cars were becoming too complex for mere untrained humans to cope with, with next to no thought given to the human-tech interface this would require.

@johannab @ai6yr

Classic among classics.

Also, there’s a 99pi episode about exactly this: https://99percentinvisible.org/episode/children-of-the-magenta-automation-paradox-pt-1/

Children of the Magenta (Automation Paradox, pt. 1) - 99% Invisible

On the evening of May 31, 2009, 216 passengers, three pilots, and nine flight attendants boarded an Airbus A330 in Rio de Janeiro. This flight, Air France 447, was headed across the Atlantic to Paris. The take-off was unremarkable. The plane reached a cruising altitude of 35,000 feet. The passengers read and watched movies and slept.

@inthehands @ai6yr oh, cool, thanks! I love 99pi, except for the fact that their back-catalogue is longer than I have years left to live, I suspect. I've listened on-and-off for over a decade and they had quite the archive when I started!

@johannab @ai6yr

I’ve listened to almost every episode by now, and I can recommend the experience.

@inthehands @johannab @ai6yr that two-parter was awesome. While the tech may have improved between then and now, no decent solution to the fundamental problem of paying attention to / being ready for failing automation has been proposed.

@Niall @inthehands @ai6yr

No kidding, I had apparently listened to those before but forgotten, and now I've listened again AND need to add them to my "slow the fuck down with AI in everything" references pile. The "automation paradox" is being baked into the whole stack right now. Particularly terrifying when I think of my previous medical systems roles, because we seem to have dropped any idea of regulating life-endangering tech, too.

@johannab @Niall @ai6yr
Yeah. In multiple spheres, I’m increasingly resigning myself to “people are going to have to learn it for themselves” mode — shifting focus from global prevention to local mitigation, away from trying to control others and toward protecting what’s already in my sphere of personal control.

In my software consulting days, I often found myself trying to get companies not to hit themselves in the head with a hammer, but it often turned out to be best to just let them do it and then ask “How’d that work out for you?” It’s painful to see the needless damage and waste in advance, but sometimes it’s the only thing that works.