Father and son incinerated after ‘self-driving’ Tesla suddenly slammed into tree

https://lemmy.dbzer0.com/post/66633794


I feel for the family’s loss: that’s a horrible way to go

… but the article has a lot of inconsistencies that cast doubt:

  • they’re in the UK, and most of the article blames self-driving, but that is not supported in the UK.
  • it gets to the end before switching to talking about Autopilot, which is supported in the UK. Autopilot is adaptive cruise control plus lane keeping. I never understood how people come to think this means self-driving: it is exactly analogous to autopilot in aircraft. Those have a range of functionality but are always under pilot command. I used to fly a small plane with a single-axis autopilot that basically just held heading, much less capable than what you’d find on military or commercial aircraft, but there was never any confusion about its capability.
  • the article blames the emergency door release complexity, which is fair, but the description of a hidden cable release depends on the model year and which seat you’re in: they get partial credit for improving this over time. The front-seat releases in my 2023 Model Y are very accessible.
  • importantly, the flush-mount door handles are not an adequate description of the problem. Firstly, the self-presenting handles were only on the high-end models, and those present mechanically, so they don’t fail that way. The root cause to focus on is the electronic latch. If your only option is an electronic latch and it fails in the crash, it doesn’t really matter what the handle or button looks like.
  • FWIW, the entire industry is aware that current batteries can ignite when sufficiently damaged and, Tesla included, has taken measures to prevent it. But there’s only so much you can do. The question is not whether current battery technology poses that risk: it does. The questions are whether that’s an outsized risk relative to other car technologies and whether Tesla could have done more. There have been several announcements of safer batteries, but I don’t think they are available yet.

With names like Autopilot and Full Self-Driving, there’s a reason people are overconfident in the car’s abilities.

Any complication in emergency door releases is a critical failure and tremendous design flaw. Emergency features should be incredibly obvious and easy to use, because when you go to use it there’s a huge chance you’re disoriented or hurt. A system you need to look for as you burn may as well not exist.

The exterior handle design is just awful. There’s a reason other countries are making them illegal and it’s not because they’re a safe choice.

There’s a reason Tesla has the highest fatal accident rate in the US despite having some of the best crash test results. You survive the impact only to die a slower, more painful death.

The fact autopilot is called that in planes but somehow pilots know it doesn’t fly the plane for them completely autonomously…
Pilots are highly trained professionals, Tesla drivers are not.

Missed my point but obviously not wrong.

It doesn’t take being a highly trained professional to understand autopilot doesn’t fly the plane.

It doesn’t, but I wouldn’t be at all surprised to find out that 1 in 4 people sitting in the plane don’t understand that. Most Tesla drivers understand that FSD and Autopilot aren’t as robust as their titles imply, but you’ve gotta remember how stupid the average consumer is.

Anecdote: last night I was talking to a childhood friend while gaming. He had just rented a new Tesla on a work trip and was talking about FSD. The overconfidence in the system after a few easy highway miles was palpable. The phrase “the future is now” was used. This is a high-achieving, high-earning guy in a STEM career. A smart guy for sure, and he hadn’t taken the time to understand the system he was using; my expectations for a grocery store manager with a GED are lower.

I guess therein kinda lies the problem for me: the fact that there are a lot of idiots doesn’t mean we should prevent everyone from using a thing.

It’s just hard to find the line between what is reasonable and babyproofing (degrading the possible quality of) a certain system/feature/tool.

I feel like AI/LLMs in particular are another example. I hear a lot of people saying they need to be blocked/banned because we have dumbasses falling in love with them, people using them to help them commit suicide, or people having breaks with reality because they believe the word-generation machines lol.

I just don’t think that because some people don’t understand how to properly use a thing, it should be completely banned, unless the thing itself is literally only intended for harm.

That doesn’t mean I don’t think there should be regulation or anything like that; it’s just that I see too many people go way beyond what I’d consider reasonable safeguards because idiots ruin it lol.