The Future of Everything is Lies, I Guess

I think the discussion has to be more nuanced than this. "LLMs still can't do X, so they're idiots" is a bad line of thought. LLMs with harnesses are clearly capable of engaging with logical problems that only need text. They're not there yet with images, but that's improving with better UIs and access to tools like Figma. What they clearly can't do is propose new, creative solutions to problems they've never seen before.

> LLMs with harnesses are clearly capable of engaging with logical problems that only need text.

To some extent. It's not clear where exactly the boundaries are, but they seem unable to approach problems in ways that aren't already embedded in the training set. I certainly wouldn't put money on one solving an arbitrary logical problem.

Solving arbitrary logical problems is essentially the Entscheidungsproblem, which Turing showed is as hard as the halting problem, so you're probably wise not to make that bet.
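To make that concrete: even an innocent-looking logical question can hide a halting question nobody knows how to answer. A toy Python sketch (using the Collatz map purely as an illustration, not anything from the thread):

```python
def collatz_steps(n: int) -> int:
    """Count iterations for n to reach 1 under the Collatz map."""
    steps = 0
    while n != 1:
        n = 3 * n + 1 if n % 2 else n // 2
        steps += 1
    return steps

# Empirically this loop terminates for every n we try:
print([collatz_steps(n) for n in range(1, 11)])
# prints [0, 1, 7, 2, 5, 8, 16, 3, 19, 6]

# But the logical claim "collatz_steps(n) halts for ALL n >= 1" is the
# Collatz conjecture, an open problem. A general solver for arbitrary
# logical problems would have to decide statements like this one, which
# is exactly the kind of halting question no algorithm can handle in
# general.
```

So "arbitrary logical problem" quietly includes undecidable territory, which is why the bet above is a bad one.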