Former Uber self-driving chief crashes his Tesla on FSD, exposes supervision problem

https://electrek.co/2026/03/17/former-uber-self-driving-chief-tesla-fsd-crash-supervision-problem/

#tesla #crash


Raffi Krikorian, Mozilla’s CTO and the former head of Uber’s self-driving car division, totaled his Tesla Model X while using...

Electrek
VERY glad the guy and his kids are okay, but it would have been something else if the Uber self-driving chief had been incinerated or killed by a self-driving car. 🤔

"...What makes this account particularly striking is Krikorian’s background. At Uber’s Advanced Technologies Center, he ran the team building autonomous vehicles and trained human safety drivers on exactly when and how to intervene when a self-driving system fails...."

🤔

LOL this is the problem with relying on AI tools, as well...

"...His core argument: Tesla is asking humans to supervise a system that is specifically designed to make supervision feel pointless. As he puts it, an unreliable machine keeps you alert, and a perfect machine needs no oversight, but one that works almost perfectly creates a trap where drivers trust it just enough to stop paying attention.

The research backs this up. Psychologists call it the “vigilance decrement”: monitoring a nearly perfect system is boring, boredom leads to mind-wandering, and drivers need 5 to 8 seconds to mentally reengage after an automated system hands control back. But emergencies unfold faster than that...."

#AI

@ai6yr every time

This publication comes to mind:

https://how.complexsystems.fail

As does a Human Factors lecture I attended last century (ugh) on the amount of money spent on psychological research to make fighter plane cockpits human-goof-proof, ON TOP of the extended, intense, and repeated training pilots go through.

One of the points, back in the early '90s, was that cars were becoming too complex for mere untrained humans to cope with, with next to no thought given to the human-tech interface required.

How Complex Systems Fail

@johannab @ai6yr It’s also where standards help and “innovation” breaks muscle memory and consistency. Cars have always had quirks and differences, but increasingly their user interfaces differ between makes in ways that seem small until they cause a crash.
- I have two cars (a Volvo and a Kia), and their interfaces do some things exactly opposite of each other (one you push up to control the windshields, the other you push down). That's minor.

More major: their safety systems differ.

@Rycaut @johannab I haven't driven a Tesla, but the brake/accelerator pedal in a Tesla is a prime example of this
@Rycaut @johannab "One Pedal Driving". COMPLETELY DIFFERENT THAN ANY OTHER CAR
@ai6yr @Rycaut @johannab Wait... What? One pedal? What???
@nazokiyoubinbou @Rycaut @johannab Yeah, so how "one pedal mode" works is that the car goes when you press the accelerator and slows to a stop when you let off it. You don't touch a brake pedal. But that trains people to NOT PRESS THE BRAKE PEDAL in other cars if they switch vehicles.

@ai6yr @nazokiyoubinbou @Rycaut @johannab

Jesus Christ. One pedal, no buttons, a touchscreen for shifting and other controls, terrible door handles? I knew they were death traps, but holy shamoley.

@coolcalmcollected @ai6yr @Rycaut @johannab Begs the question of how they're even legal, doesn't it?