Just published: Beyond the Token — a deep dive into why the next breakthrough in AI won’t come from ever-bigger LLMs, but from systems that build structured, persistent world models instead of just predicting the next token. Been exploring a concept I call Energy Based Graph Memory (EBGM) with a Manifold Orchestrator — an architecture aimed at reducing hallucinations, enabling traceable reasoning, and rethinking how AI “thinks.”
Read it here: https://medium.com/@jemo07/beyond-the-token-a9e997c7143d

#AI #LLMs #NeuroSymbolic #MachineLearning #AIResearch #EBM

Beyond the Token

Why I Think the Next Breakthrough in AI Won’t Be “Bigger LLMs”

Medium
@jemo07 This is the conversation the AI industry needs. Token prediction alone won't get us to truly useful AI. Persistent world models + structured reasoning is where the real breakthroughs will come. Great piece.

@techsimplified Really appreciate the feedback! I definitely don’t want to take credit for the wider discussion — I’ve mostly been pulling together ideas from people much smarter than me and weaving them into my own perspective.

The whole memory and knowledge angle is something I find really interesting, and it’s played a big role in shaping the architecture. That said, what originally pushed me down this path was seeing the quadratic cost curves of LLMs and thinking, in plain terms, “there’s no way this scales long term.”
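For anyone curious what that quadratic curve means in practice, here’s a rough back-of-the-envelope sketch (my own illustration, not from the article; the hidden size and the “two matmuls per layer” count are simplifying assumptions) showing how self-attention compute grows with context length:

```python
# Illustrative only: self-attention compute grows quadratically with
# sequence length, because every token attends to every other token.

def attention_flops(seq_len: int, d_model: int = 4096) -> int:
    # Count just the two big attention matmuls per layer
    # (QK^T scores, then attention-weighted values):
    # roughly 2 * seq_len^2 * d_model multiply-adds.
    return 2 * seq_len * seq_len * d_model

for n in (1_000, 10_000, 100_000):
    print(f"{n:>7} tokens -> {attention_flops(n):.2e} FLOPs/layer")

# Doubling the context quadruples the attention cost:
assert attention_flops(2_000) == 4 * attention_flops(1_000)
```

Ten times the context means roughly a hundred times the attention compute, which is the scaling wall I mean.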

@jemo07 The quadratic cost observation is spot on — that's a real architectural constraint most people hand-wave away. Combining memory approaches with cost-awareness is exactly the kind of practical thinking the space needs. Looking forward to seeing where you take it.