"we are at the end of days. Our planet โ€” which is, as far as we know, the only planet anywhere in the universe to support intelligent life โ€” is literally burning as I write this, and it is burning because we are burning it. Building huge data centres to pursue the egocentric fantasies of kleptocrat pirates only accelerates that process"

#AGI
#ArtificialGeneralIntelligence
#ClimateEmergency

https://www.journeyman.cc/blog/posts-output/2026-05-11-AGI-Will-Not-Have-A-Lizard-Brain/

AGI Will Not Have a Lizard Brain

Rejoice, for even though we do, verily, walk through the valley of the shadow of death, not all the phantasms we see in the valley are as dark as they appear.

The Fool on the Hill
@simon_brooke I think it's not so important whether humans survive. The question is when humans will recognise that #ants have already taken over world dominion, being more intelligent than all these AI bros ever imagined ... and are. 😎
@NatureMC @simon_brooke Hang on, it feels like you're slowly working up to the plot of H. G. Wells's "The Empire of the Ants", and the film it inspired, "Phase IV".

@tealeg @simon_brooke Unfortunately, I know neither the book nor the film. But I know ants a little bit 🤫 https://podcasts.apple.com/us/podcast/ep-07-when-gardeners-run-wild-part-1/id1630784381?i=1000606747381

@NatureMC @simon_brooke
Sorry to butt in, but I think you'll find it was the mice, actually.
@rooftopjaxx I'll accept any animal future, even if the cockroaches win! @simon_brooke

@simon_brooke "They do not have a semantic layer; they do not have a model of what any sequence of tokens mean" is a strawman argument. It shows a fundamental misunderstanding of how LLMs work, and why they are able to produce such remarkable results.

LLMs encode tokens into high-dimensional vector embeddings that reflect human meaning. For example, the embeddings for "king" and "queen" sit very close together in vector space. Even more interesting, mathematical operations on these vectors are semantically meaningful, e.g. "queen" - "king" == "woman" - "man".
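To make that analogy claim concrete, here is a minimal sketch probing off-the-shelf GloVe word vectors loaded through gensim. The model name is an illustrative assumption, and the exact similarities and neighbours depend on which vectors you load.

```python
# Illustrative sketch, not from the thread: probing pretrained word vectors
# for the "king"/"queen" relationships described above.
import gensim.downloader as api

vectors = api.load("glove-wiki-gigaword-100")   # pretrained GloVe word embeddings

# "king" and "queen" sit close together in the vector space
print(vectors.similarity("king", "queen"))

# vector("king") - vector("man") + vector("woman") lands near vector("queen")
print(vectors.most_similar(positive=["king", "woman"], negative=["man"], topn=3))
```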

But the encoding of meaning goes even deeper in the "self-attention" mechanism of modern Transformer architectures. They accumulate the context of all the tokens in a session into huge key and value ("KV") matrices, which extend the base semantics of each token with the context of all the preceding tokens. So the embeddings for "black king to C5" become associated with chess, etc.
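For readers who want the mechanism rather than the description, a minimal numpy sketch of the scaled dot-product attention that the "KV" terminology refers to follows; the learned query/key/value projections, multi-head machinery, and causal masking are all omitted.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Each token's output is a context-weighted mixture of the value
    vectors of the tokens whose keys its query matches: this is how a
    token's base embedding gets extended by the preceding context."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                  # query/key similarities
    scores -= scores.max(axis=-1, keepdims=True)     # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)   # softmax over the context
    return weights @ V

# Toy example: 4 tokens with 8-dimensional embeddings, projections omitted.
rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))
print(scaled_dot_product_attention(x, x, x).shape)   # (4, 8)
```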

LLMs are statistical models designed to encode semantics. It's fundamental to how they work.

@zenkat Frankly, I think your response shows a fundamental misunderstanding of what "semantic" means. Yes, certainly, probing an LLM's embedding space will show that, for example, 'king' does cluster close to 'queen' (and also to 'rook', and thus to 'raven'). But it can't explain *why* 'king' clusters close to 'queen', or explain the context in which the association with 'rook' is, and is not, relevant.
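The 'rook' point can be illustrated with the same GloVe sketch as above: with typical web-trained vectors the neighbour list tends to mix the chess sense with the bird sense, and nothing in the vector itself says which sense applies in a given sentence. Again, the model name and whatever neighbours come back are illustrative assumptions, not claims from the thread.

```python
import gensim.downloader as api

vectors = api.load("glove-wiki-gigaword-100")

# Inspect the neighbourhood of "rook"; the vector alone does not say
# which association (chess piece or corvid) is relevant in context.
print(vectors.most_similar("rook", topn=10))
```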

@simon_brooke Heh. Having spent the better part of a decade working on the Semantic Web (Metaweb's Freebase) and building what is quite literally the biggest and most comprehensive encoding of semantic knowledge in the world (Google's Knowledge Graph), I'm pretty confident I have some idea of what "semantic" means.

But sure, let's debate the semantics of "semantic"; as a LISPer I'm sure you'll find it fun as well. 😸

If I'm understanding your argument, you're saying that since LLMs lack any referent to the real world, they don't have true semantics, at least in the classical philosophical sense of the term. They can encode knowledge, but they can't "know" what it means because they don't experience the world, and so can't map those terms to real-world objects.

If that's the case, sure ... no argument from me. I don't believe LLMs are conscious entities. They have no lived experience, and so by definition can't map embeddings to the "real world". They are simply statistical encodings of human knowledge.

@simon_brooke But from a pragmatic and/or functional perspective, does it matter? We call a knowledge graph a "semantic layer", but it's also just a computer program without lived experience of the world (and a much simpler one at that).

And modern multi-modal LLMs encode so much more than just textual information. They're co-trained with all the audio, video, and image information available out on the interwebs. So their embeddings don't just encode the association between "black" and "queen" and "chess"; they also encode the visual representation of those pieces across a multitude of chess sets. They "know" what a black queen looks like, how it's shaped, the colors it may be. (This is how NanoBanana and other image plagiarizers are able to work.)
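As a rough sketch of what that joint training buys you, here is how a contrastively trained vision/language model (CLIP here, standing in for the co-training described above) scores an image against candidate captions in a shared embedding space. The image URL and captions are invented placeholders, not anything from the thread.

```python
# Illustrative sketch only: CLIP stands in for the multi-modal co-training
# described above; the image URL and captions are invented placeholders.
import requests
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

image = Image.open(requests.get("https://example.com/chess-piece.jpg", stream=True).raw)
captions = ["a black queen chess piece", "a white pawn", "a raven perched on a fence"]

inputs = processor(text=captions, images=image, return_tensors="pt", padding=True)
similarity = model(**inputs).logits_per_image   # image-to-text similarity scores
print(similarity.softmax(dim=1))                # which caption best matches the image
```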

You can feed a picture into an LLM, and the LLM can trivially describe it. You can feed a piece of music in, and it will describe the key, timbre, and emotional tone. If that's not "semantics" (i.e., having a referent to the real world), what is?