@davidgerard @pluralistic @timnitGebru #Podcast time while I'm drilling on computer security models. 1h 16m in: After publicly rebutting Dario #Amodei this week, software design legend @gradybooch had already called it in Dec 2024; he refers to #LLMs as #Stochastic #Parrots, per @emilymbender's terminology, mentions Gary #Marcus on #neurosymbolic approaches, calls out Sam #Altman on the #AGI fantasy, and describes how Yann #LeCun ended up blocking him on X. https://newsletter.pragmaticengineer.com/p/software-architecture-with-grady-booch
Software architecture with Grady Booch

Today, I’m thrilled to be joined by Grady Booch, a true legend in software development. Grady is the Chief Scientist for Software Engineering at IBM, where he leads groundbreaking research in embodied cognition.

The Pragmatic Engineer

Seems like #GaryMarcus is changing his tone: https://garymarcus.substack.com/p/the-biggest-advance-in-ai-since-the

Well, I disagree with his take that #ClaudeCode is #neurosymbolic and that it "changes everything". An if-clause in the #LLM orchestration or harness does not make a neurosymbolic #AI.

Zooming out a bit though, I think he is starting to see coding assistants in a more positive light.
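The distinction above can be sketched in a few lines of illustrative Python (all names and calls here are hypothetical placeholders, not any real harness): a keyword branch in an orchestration layer versus an actual symbolic component with explicit, inspectable rules.

```python
# Hypothetical sketch: a branch in a harness is still neural end-to-end;
# a rule engine with explicit facts and derivations is the symbolic part.

def harness_with_if_clause(prompt: str) -> str:
    # The "if clause": it only chooses WHICH opaque model call to make.
    if "calculate" in prompt:
        return "call_llm_with_calculator_tool(prompt)"  # placeholder, not a real API
    return "call_llm(prompt)"  # placeholder, not a real API

def forward_chain(facts: set[str], rules: list[tuple[frozenset[str], str]]) -> set[str]:
    # A tiny symbolic component: forward-chaining over explicit rules,
    # so every conclusion is traceable to its premises.
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= derived and conclusion not in derived:
                derived.add(conclusion)
                changed = True
    return derived

rules = [(frozenset({"bird", "not penguin"}), "can_fly")]
facts = forward_chain({"bird", "not penguin"}, rules)
print(sorted(facts))
```

The point: the first function's branch carries no semantics of its own, while the second function's output can be audited rule by rule.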

The biggest advance in AI since the LLM

Why Claude Code changes everything

Marcus on AI
“#ClaudeCode isn’t better because of #scaling. It’s better because it is #neurosymbolic. #Anthropic accepted the importance of using #classical #AI #techniques alongside #neuralnetworks — precisely the marriage I have spent my career advocating.” - #GaryMarcus open.substack.com/pub/garymarc...

The best of both worlds: while deep learning (neural) excels at perception, classical AI (symbolic) contributes logical reasoning and rule compliance. 🧠💻 Neuro-symbolic AI unites these approaches to create models that are more robust and, above all, explainable.

Learn more in my new post on @BASICthinking
https://www.basicthinking.de/blog/2026/04/06/neuro-symbolische-ki/

#KI #NeuroSymbolic #DeepLearning #Informatik #Innovation #ExplainableAI #XAI #TechNews

Neuro-symbolic AI drastically cuts the energy needed for training

A research team combined neural networks with logic rules. The result: up to 99 percent less energy during AI training.

BASIC thinking
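The "perception plus rule compliance" split from the post can be made concrete with a small, made-up sketch (the scores, labels, and the school-zone rule are all invented for illustration): a symbolic constraint vetoes an implausible neural prediction and attaches a human-readable reason, which is where the explainability comes from.

```python
# Hedged sketch with invented numbers: a rule layer over a perception model.

def neural_scores(image_id: str) -> dict[str, float]:
    # Stand-in for a perception model's class probabilities.
    return {"stop_sign": 0.48, "speed_limit_50": 0.52}

def apply_rule(scores: dict[str, float], context: dict) -> tuple[str, str]:
    # Symbolic constraint: in a school zone a 50 km/h limit is ruled out,
    # so the rule vetoes that class and records WHY, in plain language.
    if context.get("school_zone") and "speed_limit_50" in scores:
        scores = {k: v for k, v in scores.items() if k != "speed_limit_50"}
        reason = "rule: speed_limit_50 excluded in school zones"
    else:
        reason = "no rule fired"
    best = max(scores, key=scores.get)
    return best, reason

label, why = apply_rule(neural_scores("img_001"), {"school_zone": True})
print(label, "|", why)
```

Without the rule, the raw scores would pick the wrong class by a hair; with it, the decision comes with an auditable justification rather than just a probability.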

Argh, I nearly missed it #LIVE #LIVENOW #ONTOLOGY https://www.youtube.com/watch?v=OxXvUO2g1QA the talk is *just* starting at the 2026 #KR summit

Unsatisfactory #neurosymbolic #AI

Ontology Summit 2026

YouTube

"#AI excels at learning through association, but fails when a problem requires a form of symbolic reasoning that cannot be implicitly learned from the correlation between game states and outcomes - a tangible, catastrophic failure mode"

Another example of why #neurosymbolic reasoning is necessary for #AGI
https://arstechnica.com/ai/2026/03/figuring-out-why-ais-get-flummoxed-by-some-games/

Figuring out why AIs get flummoxed by some games

When winning depends on intuiting a mathematical function, AIs come up short.

Ars Technica

Just published: Beyond the Token — a deep dive into why the next breakthrough in AI won’t come from ever-bigger LLMs, but from systems that build structured, persistent world models instead of just predicting the next token. Been exploring a concept I call Energy Based Graph Memory (EBGM) with a Manifold Orchestrator — an architecture aimed at reducing hallucinations, enabling traceable reasoning, and rethinking how AI “thinks.”
Read it here: https://medium.com/@jemo07/beyond-the-token-a9e997c7143d

#AI #LLMs #NeuroSymbolic #MachineLearning #AIResearch #EBM

Beyond the Token

Why I Think the Next Breakthrough in AI Won’t Be “Bigger LLMs”

Medium
What if the vocabulary you use every day weren't neutral?
When you say "call a function", "call an API", "call a service", what are you implicitly accepting as an architecture?
A world of permanently running processes, open sockets, queues, watchers watching, latencies to manage?
What if, just by changing one word, you could break out of that invisible prison?
#SymbolicAI
#Neurosymbolic
#AIArchitecture

LOGOS-κ: A protocol for executable semantics and dynamic knowledge graphs

Modern knowledge tools (OWL, RDF) are static: they describe the state of the world, but not the processes that change it. LLMs (Large Language Models), on the other hand, generate content dynamically, but often suffer from hallucinations and a lack of structural memory...

#logosκ #semantics #logos #datastorage #interoperability #semanticdb #python #semanticcomputing #neurosymbolic #ai

Source: https://dstglobal.ru/club/1143-logos-protokol-ispolnjaemoi-semantiki-i-dinamicheskih-grafov-znanii
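The static-vs-dynamic contrast in that description can be sketched in a few lines (this is a speculative toy, not the actual LOGOS-κ protocol, whose details aren't given here): an RDF-style graph of triples only states what *is*, while an "executable" step is a named transition that changes the graph and leaves a provenance trace.

```python
# Speculative toy, not LOGOS-κ itself: triples vs. executable transitions.

# A static RDF-style snapshot: triples describing the current state.
graph = {("door", "state", "closed")}

def apply_transition(graph: set, name: str, remove: set, add: set) -> set:
    # An executable-semantics step: check a precondition, apply the change,
    # and record which transition touched the graph (crude provenance).
    assert remove <= graph, f"precondition of {name} not met"
    return (graph - remove) | add | {(name, "applied_to", "graph")}

graph = apply_transition(
    graph,
    name="open_door",
    remove={("door", "state", "closed")},
    add={("door", "state", "open")},
)
print(sorted(graph))
```

The snapshot alone could never tell you *how* the door came to be open; the transition log is what makes the graph dynamic and auditable.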