Apple did the research; LLMs cannot do formal reasoning. Results change by as much as 10% if something as basic as the names change.
https://garymarcus.substack.com/p/llms-dont-do-formal-reasoning-and
@anderspuck Because they're expected to solve complex tasks, they're being sold as if they can solve complex tasks, and their failure and error rates are high enough that they're not safe.
They want these things to drive cars and make decisions that involve human lives.
@kasperd @anderspuck @ShadowJonathan
Dittos! I was about to post the same thing.
The industry has been developing "AI" technologies since before I was born. Many work quite well, and are useful. Some save money. Some save lives.
You probably interact with "traditional" AI systems far more often than you realize.
Each has to be evaluated based on its costs and benefits and risks.
Generative AI / LLM chatbots are a dangerous, wasteful SCAM.
Self-driving cars are still "iffy."
Self-driving cars are iffy, but human-driven cars are dangerous. A self-driving car might already be safer than one driven by a human.
The hard question is what people will choose if given the option between two accidents that can be blamed on human drivers and one accident with a self-driving car, where there isn't anyone to blame.