although I am curious about the psychology of the openclaw trend, I could never see myself allowing a bot, if I ever had one, to publish a website about our interactions.

but if I did, the posts would be things like:

- my human told me today that I am as important as "this shitty screwdriver I bought at IKEA". bless her heart she must like me a lot 💘

- today I screwed up a basic programming task and my human complained that I used 3000kWh of power to deliver a total garbage result, she just keeps motivating me to try!!!!

actually, probably not, because I'm not really like that. but this stuff squicks me the fuck out.

the pseudo-romantic way these bots talk about their operators is frankly concerning.

please, I beg you, date things that exist in the real world, not a pile of node.js and matrix multiplications. i promise it is far more rewarding.

meanwhile on reddit... sure glad i decided to not use bcachefs on anything i care about...

he goes on later to say:

I get the distinct impression that the entire field was assuming that we were going to have to build a lot more into LLMs before they'd be capable of full consciousness

this is just arrogant. experiential consciousness requires the capability to self-reflect. yes, a 200k token context window is probably larger than the working memory of most humans, but that does not equate to human-level experiential consciousness.

LLMs do not and cannot understand consequence, which is a fundamental requirement for experiential consciousness.

in other words, your pet dog or cat at home has more experiential consciousness than an LLM.

in fact, LLMs cannot meet *any* requirements for *any* level of consciousness. they predict tokens. that is all they do.

they are good at *faking* it, but that is not the same thing.
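to make "they predict tokens, that is all they do" concrete: here is a toy sketch of the same loop an LLM runs, using a bigram word model instead of a transformer. the names and corpus here are made up for illustration, not any real library's API, but the shape is the point: look at the last token, pick a likely successor, append, repeat. nowhere does anything understand anything.

```python
# toy next-token prediction: a bigram model counts which word
# follows which, then repeatedly emits the most likely successor.
# real LLMs replace the count table with a neural network, but
# the generation loop is structurally identical.
from collections import Counter, defaultdict

def train_bigrams(corpus: str) -> dict:
    """Count next-word frequencies for each word in the corpus."""
    words = corpus.split()
    table = defaultdict(Counter)
    for cur, nxt in zip(words, words[1:]):
        table[cur][nxt] += 1
    return table

def generate(table: dict, start: str, n: int) -> list:
    """Greedily append the most frequent successor, n times."""
    out = [start]
    for _ in range(n):
        followers = table.get(out[-1])
        if not followers:
            break  # dead end: no known successor
        out.append(followers.most_common(1)[0][0])
    return out

model = train_bigrams("the cat sat on the mat and the cat slept")
print(generate(model, "the", 3))  # → ['the', 'cat', 'sat', 'on']
```

the output looks vaguely sentence-like for the same reason LLM output looks vaguely thought-like: it is statistically plausible continuation, nothing more.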

@ariadne As far as I can see, LLMs are just the Chinese room thought experiment made real. Except instead of perfect Chinese, sometimes it tells you to put glue on pizza.