A peek into the MEMORY.md wrt my Claw #LLM fiddling about. I think AGI has been achieved or, more likely, I am extremely anodyne because yeah this tracks
it still weirds me out that people are stampeding over each other to buy *Mac minis* just to run this. Provided you're doing the typical thing and not trying to back it with a local #LLM, it's just ridic. Even at full-fat OpenClaw that is such overkill. Completely unnecessary.
@adr Saw a post that mac minis get pretty slow when running sub-agents or any kind of parallel prompts. I need to look into that claim more, but it does track. It sounds like they're not even great at what people are buying them for.
@dvshkn oh, maybe that's where I'm getting confused, because I haven't even *tried* to do anything parallel or sub, really.
@adr I guess a lot of people buying mac minis aren't doing it to run local models, so maybe they'll never run into it. AFAIK the latest version of claude code can do some sub-agent stuff, but I haven't gotten around to playing with it myself. It sounds like it is getting more accessible though.
@dvshkn I've wanted to mess with Claude Code on my locals but it *times out* way too frequently, and there's no way to really extend the timeout, period (there's talk about it, but none of the purported solutions work, as far as I can tell). Opencode works fine, so I generally stick with that.
@adr Yeah, I like opencode for local stuff, too. I haven't really tried using claude code with non-claude models.

@adr I have returned to retract some of my mac mini slander. Apparently people are streaming MoE weights from SSD, which is pretty rad:

https://xcancel.com/simonw/status/2036294026438254783

Tbf, I don't know if this is mac-specific.
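The idea behind streaming MoE weights, as I understand it: with a mixture-of-experts model, the router only activates a few experts per token, so you can mmap the weight file and let the OS page in just the experts you actually touch instead of loading everything into RAM. A toy sketch of that access pattern (the file layout, expert count, and sizes here are all invented for illustration, nothing to do with any real model format):

```python
# Toy sketch: read only one expert's weight block from a flat binary
# file via mmap, so unused experts never have to leave the SSD.
import mmap
import os
import struct

NUM_EXPERTS = 8       # hypothetical expert count
EXPERT_FLOATS = 4     # tiny per-expert "weights", demo-sized
EXPERT_BYTES = EXPERT_FLOATS * 4  # 4 bytes per little-endian float32

def write_fake_weights(path):
    """Write NUM_EXPERTS fake expert blocks back to back."""
    with open(path, "wb") as f:
        for e in range(NUM_EXPERTS):
            for _ in range(EXPERT_FLOATS):
                f.write(struct.pack("<f", float(e)))

def load_expert(mm, expert_id):
    """Slice out one expert; the OS pages in only these bytes."""
    off = expert_id * EXPERT_BYTES
    raw = mm[off:off + EXPERT_BYTES]
    return [v for (v,) in struct.iter_unpack("<f", raw)]

path = "experts.bin"
write_fake_weights(path)
with open(path, "rb") as f:
    mm = mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ)
    # Pretend the router picked expert 3 for this token.
    w = load_expert(mm, 3)
    mm.close()
os.remove(path)
```

Obviously real inference stacks do this with proper tensor formats and caching, but the kernel-does-the-paging trick is the gist, and it's not obviously mac-specific.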