Father and son incinerated after ‘self-driving’ Tesla suddenly slammed into tree
I feel for the family’s loss: that’s a horrible way to go
…but the article has a lot of inconsistencies that cast doubt on it.
With names like Autopilot and Full Self-Driving, there's a reason people are overconfident in the car's abilities.
Any complication in emergency door releases is a critical failure and tremendous design flaw. Emergency features should be incredibly obvious and easy to use, because when you go to use it there’s a huge chance you’re disoriented or hurt. A system you need to look for as you burn may as well not exist.
The exterior handle design is just awful. There’s a reason other countries are making them illegal and it’s not because they’re a safe choice.
There's a reason Tesla has the highest fatal accident rate in the US despite having some of the best crash test results: you survive the impact only to die a slower, more painful death.
Missed my point but obviously not wrong.
It doesn’t take being a highly trained professional to understand autopilot doesn’t fly the plane.
It doesn't, but I wouldn't be at all surprised to find out 1 in 4 people sitting in the plane don't understand that. Most Tesla drivers understand that FSD and Autopilot aren't as robust as their titles imply, but you've gotta remember how stupid the average consumer is.
Anecdote: last night I was talking to a childhood friend while gaming, and he'd just rented a new Tesla on a work trip and was talking about FSD. The overconfidence in the system after a few easy highway miles was palpable. The phrase "the future is now" was used. This is a high-achieving, high-earning guy in a STEM career. A smart guy for sure, and he hadn't taken the time to understand the system he was using; my expectations for a grocery store manager with a GED are lower.
I guess therein kinda lies the problem for me: the fact that there are a lot of idiots doesn't mean we should prevent everyone from using that thing.
It's just hard to find the line between what is reasonable and what is babyproofing (degrading the possible quality of) a certain system/feature/tool.
I feel like AI/LLMs in particular are another example. I hear a lot of people saying they need to be blocked/banned because we have dumbasses falling in love with them, people using them to help them commit suicide, or people having breaks with reality because they believe the word-generation machines lol.
I just don't think that because some people don't understand how to properly use a thing, it should be completely banned, unless the thing itself is literally only intended for harm.
That doesn't mean I don't think there should be regulation or anything like that; it's just that I see too many people go way beyond what I'd consider reasonable safeguards because idiots ruin it lol.