Car Wash Test on 53 leading AI models: "I want to wash my car. The car wash is 50 meters away. Should I walk or drive?"

https://lemmy.world/post/43503268

A screenshot of this question was making the rounds last week, but this article covers testing against all the well-known models out there. It also includes outtakes from the ‘reasoning’ models.

We poked fun at this meme, but it goes to show that the LLM is still like a child that needs to be taught to make implicit assumptions and possess contextual knowledge. The current generation of LLMs needs a lot more input and instruction to do specifically what you want it to do, like a child.
LLMs are not children. Children can have experiences, learn things, know things, and grow. Spicy autocomplete will never actually do any of these things.
I’m sure AI will do those things at some point. Nobody expected the same of our microorganism ancestors.
Our microorganism ancestors also did all those things, and they were far beyond anything an LLM can do. Turning a given list of words into numbers, doing a string of math to those numbers, and turning the resulting numbers back into words is not consciousness or wisdom and never will be.

You think microorganisms can reason? Wow, AI haters are grasping at straws.

Honestly, I don’t understand Lemmy scoffing at AI and thinking the current iteration is all it ever will be. I’m sure no one thought automobile technology would go anywhere simply because the first model ran at 3 mph. These things always take time.

To be clear, I’m not endorsing AI, but I think there is huge potential in the years to come, for better or worse. And it is especially important never to underestimate something, especially for AI haters, given the destructive potential AI has.

The straw I’m grasping at in this example is a reasonably well-accepted scientific consensus, but you do you.
Microbial intelligence - Wikipedia

Can you explain how quorum sensing is reasoning?