Former Uber self-driving chief crashes his Tesla on FSD, exposes supervision problem

https://electrek.co/2026/03/17/former-uber-self-driving-chief-tesla-fsd-crash-supervision-problem/

#tesla #crash

Raffi Krikorian, Mozilla’s CTO and the former head of Uber’s self-driving car division, totaled his Tesla Model X while using...

Electrek
VERY glad the guy and his kids are okay, but it would have been something else if the Uber self-driving chief had been incinerated or killed by a self-driving car. 🤔

"...What makes this account particularly striking is Krikorian’s background. At Uber’s Advanced Technologies Center, he ran the team building autonomous vehicles and trained human safety drivers on exactly when and how to intervene when a self-driving system fails...."

🤔

LOL this is the problem with relying on AI tools, as well...

"...His core argument: Tesla is asking humans to supervise a system that is specifically designed to make supervision feel pointless. As he puts it, an unreliable machine keeps you alert, and a perfect machine needs no oversight, but one that works almost perfectly creates a trap where drivers trust it just enough to stop paying attention.

The research backs this up. Psychologists call it the “vigilance decrement”: monitoring a nearly perfect system is boring, boredom leads to mind-wandering, and drivers need 5 to 8 seconds to mentally reengage after an automated system hands control back. But emergencies unfold faster than that...."

#AI

@ai6yr I observed this effect first hand over 20 years ago supervising a dot-com era news project that was assisted by early AI technology.

It was early, but the #AI did some impressive things already. It easily outsmarted the humans who were hired to supervise it, but that was only because the humans had a lot to learn.

Once the humans caught up, the AI wasn't all that useful. What seemed like superpower technology on day one of the job would become a babysitting chore by six months.

Ultimately, people always preferred to do the work directly. It wasn't just less boring that way. It was also less frustrating.

What we experienced was worse than simple vigilance fatigue because it was boredom multiplied by the frustration of not having control.

If you make a mistake, you have an idea of what happened and can choose to make changes. However terrible the outcome, you maintain a locus of control.

When the black box screws up, there's nothing you can do. It's learned helplessness inside an absurd Kafka-Beckett collab.

@sysop408 @ai6yr Kafka would LAUGH if he were alive.