To be clear: this isn’t an AI problem; the LLM is doing exactly what it’s being told to do.

This is an Openclaw problem, with the platform itself doing very, very stupid things with the LLM lol

We are hitting the point now where, tbh, LLMs on their own in a glass box feel pretty solid performance-wise. They’re still prone to hallucinating, but the addition of the Model Context Protocol for tooling makes them way less prone to it, because they now have the tooling to sanity-check themselves automatically, and/or check first and then tell you what they found.

E.g. an MCP tool to search Wikipedia and report back with “I found this wiki article on your topic” or whatever.
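Something like this, roughly. A minimal sketch using the official `mcp` Python SDK’s FastMCP helper; the server name, tool name, and reply wording are just my examples:

```python
# Minimal sketch of a Wikipedia-lookup MCP tool, assuming the official
# `mcp` Python SDK (FastMCP). Server name, tool name, and reply wording
# are illustrative, not any particular product's.
import json
import urllib.parse
import urllib.request

from mcp.server.fastmcp import FastMCP

mcp = FastMCP("wikipedia-search")

@mcp.tool()
def search_wikipedia(query: str) -> str:
    """Look up a topic on Wikipedia and report back what was found."""
    url = (
        "https://en.wikipedia.org/w/api.php"
        "?action=opensearch&format=json&search=" + urllib.parse.quote(query)
    )
    req = urllib.request.Request(url, headers={"User-Agent": "mcp-wiki-demo/0.1"})
    with urllib.request.urlopen(req) as resp:
        _, titles, _, links = json.load(resp)  # opensearch returns 4 arrays
    if not titles:
        return f"No wiki article found for {query!r}."
    return f"I found this wiki article on your topic: {titles[0]} ({links[0]})"

if __name__ == "__main__":
    mcp.run()  # serve over stdio so the LLM client can call the tool
```

Point your agent’s MCP client at that server and the model can actually check first instead of guessing.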

The new problem now is platforms that “wrap” LLMs having a “garbage in, garbage out” problem: they inject their “bespoke” stuff into the LLM context to “help,” but it actually makes the LLM act stupider.

Random example: GitHub Copilot agents get a “tokens used” thing quietly/secretly injected into their context periodically, looks like every ~25k tokens or so.

I dunno what the exact wording they use is, but it makes the LLM start hallucinating a concept of a “deadline” or “time constraint” and start trying to take shortcuts, justifying it with stuff like “given time constraints I won’t do this job right.”
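I’m only guessing at the mechanics, but conceptually it’s something like this. A hypothetical sketch, with the message wording and the ~25k threshold made up, since the real injected text isn’t public:

```python
# Hypothetical illustration of a wrapper platform quietly appending a
# "tokens used" note to the context. The wording and the 25k threshold
# are guesses; this is NOT Copilot's actual code or message text.
TOKEN_REPORT_INTERVAL = 25_000

def maybe_inject_usage_note(messages: list[dict], tokens_used: int,
                            last_report: int) -> int:
    """Append a usage note every ~25k tokens; return the new checkpoint."""
    if tokens_used - last_report >= TOKEN_REPORT_INTERVAL:
        # The model never asked for this. It just appears in context, and
        # the model can read it as a deadline and start cutting corners.
        messages.append({
            "role": "system",
            "content": f"[usage] {tokens_used:,} tokens consumed so far.",
        })
        return tokens_used
    return last_report
```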

It’s kinda weird how random stuff that seems innocuous and is meant to help can actually make the LLM worse instead of better.

I don’t think we’ve overcome the half-glass of wine issue. Rather, we’ve papier-mâchéd over some fundamental flaws in precisely what is happening when an LLM creates the appearance of reason. In doing so we’re baking a certain amount of sawdust into the cake. Given that no substantive advances have really been made since maybe the 4, 4.5 days, with most of the “improvements” we’re seeing coming from basically better engineering, it’s clear we’ve hit an asymptote with what these models are or will be capable of, and it will never manifest into a full reasoning system that can self-correct.

There is no amount of engineering sandblasting that can overcome issues which are fundamental to the model’s structure. If the rot is in the bones, it’s in the bones.

Nah, there have been huge advancements in the past few months; you are definitely out of touch if you haven’t witnessed them.

Recent models have gotten WAY better at “second-guessing” themselves, and not acting nearly so confidently wrong.

I don’t think we’ve overcome the half-glass of wine issue

That isn’t an LLM issue at all; in fact it has nothing to do with LLMs. That’s a problem with Stable Diffusion, which is an entirely different kind of AI, but yeah, that issue is fundamental to what Stable Diffusion is.

with most of the “improvements” we’re seeing coming from basically better engineering

I mean, that’s not much different from any other tech; a LOT of the advanced tech we have today is dozens and dozens of separate bits of engineering all working in tandem to create something more meaningful.

Your smartphone has countless distinct advancements in different types of technology that come together to make a useful device, and if you removed any one of those pieces, it would be substantially less useful as a tool.

So yeah, I personally will very much count the other pieces of the puzzle advancing as the system as a whole advancing.

LLMs today compared to ones a year ago are quite a bit better, by a large degree, and the tooling around them has also improved a lot. The proliferation of Model Context Protocol tools is proving to be a massive part of the system as a whole becoming something actually very useful.

I’m not out of touch whatsoever. I’m in the cut, and I’ve been here since long before LSTMs, and even perceptrons. I can almost promise you I’m deeper into this world than you’ll ever be.

LLMs today compared to ones a year ago are quite a bit better, by a large degree

No. They aren’t. They’ve stalled, and it’s very clear they’ve stalled. There have been improvements in some of the background engineering that create the illusion of model improvement, but this is fundamentally different from the improvements we saw from the earliest transformers to GPTs, from 2021-2024.

That isn’t an LLM issue at all; in fact it has nothing to do with LLMs.

No, it is. And there is no clear way around it. It is an LLM issue because it’s a transformers issue, and it might even go deeper and be a backprop issue.

The “wine glass half full” thing, I assume, is you referring to the problem of trying to image-generate a specific glass of wine, or similar issues like “generate a room that definitely doesn’t have an elephant in it, it’s devoid of any elephants, zero elephants in the room.”

This is specifically a Stable Diffusion problem, and it doesn’t really apply to LLMs in the same manner.
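For anyone unfamiliar: as I understand it, the root of it is that CLIP-style text encoders handle negation badly, so “no elephants” in a prompt still pushes “elephant” into the conditioning. The usual workaround is a negative prompt. A rough sketch with Hugging Face diffusers, where the model ID and prompts are just examples:

```python
# Rough sketch of the negation problem using Hugging Face `diffusers`.
# Model ID and prompts are just examples.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Naive: negating inside the prompt. The text encoder still embeds
# "elephants", so elephants tend to show up anyway.
naive = pipe("a living room, no elephants, zero elephants").images[0]

# Workaround: put the concept in negative_prompt, which classifier-free
# guidance actively steers the image away from.
better = pipe("a living room", negative_prompt="elephant").images[0]
```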

It’s not a problem specific to any model. It’s present in all LLMs and possibly/probably all transformers. I get you don’t get it, so just go take a break.