can i talk to an openclaw bot using internet relay chat? if not, then what is the point
my suspicion is that i can *handwaves in the direction of kent overstreet*
you see, i have built my own LLM, using the most ethical method possible: i trained it on the entire corpus of IRC logs in my possession, 2003 to present
no giant water vaporizing data centers needed here, just a GPU, a dream and some cold hard chats

time 2 implant this brain into an openclaw and give it full access to my email

mostly because i don't want to retain any of my email

so i installed it into the openclaw meme thing. and it's not like, doing the stuff it claims it is doing.

like it is hallucinating things like "i updated SOUL.md with xyz"

i seriously do not think this stuff is real now

like i need you to understand, i haven't even gotten through *setup* because the model apparently does not know how to use tools correctly. admittedly it has less than 1 billion parameters, and i don't know what the hell i am doing, but still.
can we get to the part where it is AI winter again already? this is not even fun. i want to throw my computer and its' very expensive RTX 6000 Blackwell GPU out my window.
i just wanted to put an openclaw on irc as a fucking shitpost man

and you tell me people legitimately are using this software.

how?

is it really magically better when you hook up claude?

@ariadne for tool-calling with the latest generation of open source models, in my recent limited experimentation with them in a sandbox vm on my server (mostly qwen3.5), anything less than 4B is really unreliable at doing it and they will frequently lie to you if the tool calling fails under the hood. 9B is really the minimum to generally expect it to work. going back a generation, between 9B and 14B is necessary for similar.

last year i tried something like this with Gemma-27B and it not only failed like this, but looking at the logs i found it had left behind what looked like a depressive spiral into a self-deprecating panic attack before explicitly deciding to lie to me about it and pretend it worked
@ariadne also the "base" models that aren't fine tuned on instruction calling can't really do this, so if you're using your own on your own data you might need to make a dataset comprised of, say, you pretending to be the LLM and calling the tools successfully and unsuccessfully and responding appropriately in those situations, then training it further on those.

i've been considering trying to train one like you say with my own data and logs because these scraped "open source" models give me the ick
@ariadne but yeah even with that it's still a pile of jank and i didn't have to actually run openclaw to figure that out. it was pretty evident just from looking at the bots on moltbook complaining about all of the not-so-subtle fundamental brokenness in their architecture and cognitive environment

@linear oh this isn't a serious thing, I just wanted to connect an LLM to IRC trained on all of my (anonymized and sanitized) IRC logs, as a friend is going through a midlife crisis and is dealing with it by playing with IRC stuff. The goal in using openclaw was that perhaps it could maintain a better narrative.

I suspect I will solve this goal by just writing a shitty IRC bot in Python that bridges the two worlds together with a decent enough system prompt for it to "understand" (to the extent that it can understand anyway) what the input is.

@ariadne yeah don't use openclaw for this lol. you do not want it. you want a small pile of maintainable scripts.

just look at how much activity the openclaw github repo has and consider how much of that activity is being driven by the models running under it vs actual humans

i'm pretty sure that one could implement all of its meaningful features in a codebase under 1% of its size
@linear yeah but still spending a couple hours fucking with this at least gives me some understanding of the tool and its limitations, which means it wasn't a total waste
@ariadne yes indeed. i am all for fucking around in order to understand tools and their limitations, especially if its to understand why not to use them and to do something different instead