Former Uber self-driving chief crashes his Tesla on FSD, exposes supervision problem

https://electrek.co/2026/03/17/former-uber-self-driving-chief-tesla-fsd-crash-supervision-problem/

#tesla #crash

Raffi Krikorian, Mozilla’s CTO and the former head of Uber’s self-driving car division, totaled his Tesla Model X while using...

Electrek
VERY glad the guy and his kids are okay, but it would have been something else if the Uber self-driving chief had been incinerated or killed by a self-driving car. 🤔

"...What makes this account particularly striking is Krikorian’s background. At Uber’s Advanced Technologies Center, he ran the team building autonomous vehicles and trained human safety drivers on exactly when and how to intervene when a self-driving system fails...."

🤔

LOL this is the problem with relying on AI tools, as well...

"...His core argument: Tesla is asking humans to supervise a system that is specifically designed to make supervision feel pointless. As he puts it, an unreliable machine keeps you alert, and a perfect machine needs no oversight, but one that works almost perfectly creates a trap where drivers trust it just enough to stop paying attention.

The research backs this up. Psychologists call it the “vigilance decrement”: monitoring a nearly perfect system is boring, boredom leads to mind-wandering, and drivers need 5 to 8 seconds to mentally reengage after an automated system hands control back. But emergencies unfold faster than that...."

#AI

@ai6yr every time

This publication comes to mind:

https://how.complexsystems.fail

As does a Human Factors lecture I attended last century (ugh) on the amount of money spent on psychological research to make fighter plane cockpits human-goof-proof, ON TOP of the extended, intense, and repeated training pilots go through.

One of the points in the early 90's was cars were becoming too complex for mere untrained humans to cope with, with next to no thought about the human-tech interface required.

@johannab @ai6yr it’s also where standards help and “innovation” breaks muscle memory and consistency. Cars have always had quirks and differences but increasingly their user interfaces are becoming so different between makes sometimes in small until it causes a crash ways
- I have two cars (a Volvo and a Kia) their interfaces do some things exactly opposite of each other (one you push up to control the windshields the other you push down) that’s minor

More major - their safety systems differ

@Rycaut @johannab I haven't driven a Tesla, but the brake/accelerator pedal in a Tesla is a prime example of this
@Rycaut @johannab "One Pedal Driving". COMPLETELY DIFFERENT THAN ANY OTHER CAR
@ai6yr @Rycaut @johannab Teslas (at least the ones I have seen and driven) do have a brake pedal, but you can set them to drive like a golf cart, where taking your foot completely off the "gas" pedal will bring it to a complete stop. Pressing the brake pedal will always engage the brakes AFAIK. I have a different brand EV and it can also do this (though I choose not to).

@kajord @ai6yr @Rycaut

A related problem in this "paradox of automation" discussion is that a lot of these techbros got it into their heads that they should completely redesign the entire cockpit. They think they can "optimize" and "be more efficient" by moving or removing manual controls, since "self-driving" means they're not needed.

IIRC, there has been more than one forensic investigation where people died in Teslas not in the crash, but because they couldn't get out of the fire.

@johannab @kajord @ai6yr and rescuers haven’t been able to open the doors from the outside to help people.

It’s horrible design, and deeply problematic not to design for worst-case situations and let those inform design choices. The bar to switch away from very well established controls (mechanical door handles) should be exceptionally high. Flush handles arguably help aerodynamics, but complicating the doors was an active choice.

@Rycaut @kajord @ai6yr to give Tesla (not melon husk, the real actual designers and engineers) the teeeeniest benefit of the doubt, their oversights are a consequence of car-culture capitalism. We have never in the slightest way considered the safety of people outside the car, or even of occupants inside the car at any time a catastrophic deceleration was not happening.

Regulation that places the dollar not the human as the subject beneficiary is evil.