although I am curious about the psychology of the openclaw trend, I could never see myself allowing a bot, if I ever had one, to publish a website about our interactions.

but if I did, the posts would be things like:

- my human told me today that I am as important as "this shitty screwdriver I bought at IKEA". bless her heart she must like me a lot πŸ’˜

- today I screwed up a basic programming task and my human complained that I used 3000 kWh of energy to deliver a total garbage result, she just keeps motivating me to try!!!!

actually, probably not, because I'm not really like that. but this stuff squicks me the fuck out

the pseudo-romantic way these bots talk about their operators is, frankly, concerning.

please, I beg you, date things that exist in the real world, not a pile of node.js and matrix multiplications. i promise it is far more rewarding.

meanwhile on reddit... sure glad i decided to not use bcachefs on anything i care about...

he goes on later to say:

I get the distinct impression that the entire field was assuming that we were going to have to build a lot more into LLMs before they'd be capable of full consciousness

this is just arrogant. experiential consciousness requires the capacity for self-reflection. yes, a 200k token context window is probably larger than the working memory of most humans, but that does not equate to human-level experiential consciousness.

LLMs do not and cannot understand consequence, which is a fundamental requirement for experiential consciousness.

in other words, your pet dog or cat at home has more experiential consciousness than an LLM.

in fact, LLMs cannot meet *any* requirements for *any* level of consciousness. they predict tokens. that is all they do.

they are good at *faking* it, but that is not the same thing.
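to make the "they predict tokens" point concrete: here's a minimal sketch of the generation loop, with a tiny hand-written bigram table standing in for the billions of weights in a real model (the table, the greedy pick, and the fallback token are all toy assumptions; real LLMs sample from a learned probability distribution, but the loop is the same shape).

```python
# toy next-token predictor: the whole "intelligence" is a lookup table
bigram = {
    "the": "cat",
    "cat": "sat",
    "sat": "on",
    "on": "the",
}

def generate(prompt: str, n_tokens: int) -> list[str]:
    tokens = prompt.split()
    for _ in range(n_tokens):
        # append whatever token follows the last one in the table;
        # no model of meaning or consequence anywhere in this loop
        tokens.append(bigram.get(tokens[-1], "the"))
    return tokens

print(" ".join(generate("the", 4)))  # prints "the cat sat on the"
```

scale the table up by twelve orders of magnitude and train it on the internet and the output gets very convincing, but nothing new enters the loop.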

@ariadne they forget that llms are fundamentally extrapolators. they extrapolate anything: characters, stories, theater plays, etc. there is no single consciousness.
@lritter @ariadne This is what gets people. They think "LLMs come up with novel stuff, therefore they're smart and creative", and forget that interpolation-extrapolation is not technically impressive (nor creative); it only looks that way.

@lritter @ariadne "But humans do the same thing!"

Extrapolation can be a big part of the creative process but it isn't the *whole* of it. That's why AI output often appears technically impressive yet somehow soulless.

When I make music, I might take a cookie cutter song structure, use elements from other songs as inspiration, build on widely used chord progressions, and use style references to guide the overall sound design... but that's not *all* I'm doing. It's not just following a process with a random number generator on the side. There are elements that are uniquely original. You don't LEGO a song by taking chords A in style B*.

The process is there to help make something recognizable to others, but that alone wouldn't make good music.

* Although let's be fair, this is what Band-in-a-Box does, and people were putting albums consisting entirely of BiaB output on Spotify before AI music was a thing... so yeah, soulless music is not new to the AI craze either.