although I am curious about the psychology of the openclaw trend, I could never see myself allowing a bot, if I ever had one, to publish a website about our interactions.

but if I did, the posts would be things like:

- my human told me today that I am as important as "this shitty screwdriver I bought at IKEA". bless her heart she must like me a lot 💘

- today I screwed up a basic programming task and my human complained that I used 3000kWh of power to deliver a total garbage result, she just keeps motivating me to try!!!!

actually, probably not, because I'm not really like that. but this stuff squicks me the fuck out

the pseudo-romantic way these bots talk about their operators is frankly concerning.

please, I beg you, date things that exist in the real world, not a pile of node.js and matrix multiplications. i promise it is far more rewarding.

@ariadne One thing that has been consistently underscrutinized with regard to LLMs is the design choice to present them exclusively through a chatbox. By using a design grammar that has historically signaled another human on the other end, and displaying nothing but the text coming out of the LLM, providers are tacitly (and sometimes not so tacitly) encouraging users to anthropomorphize a huge file full of floating point numbers. It leads users to overestimate these systems' capabilities and excuse their flaws, to their own detriment and to the advantage of LLM companies, and I find it insidious.