now that i am... writing my own agentic LLM framework thing... because if you're going to have a shitposting IRC bot you may as well go completely overkill, i have Opinions on the state of the world.

openclaw, especially, seems to be hot garbage: i was able to teach my LLM (which i trained from scratch on the highest quality artisanal IRC logs, 2003 to present, so i can assure you it is not a very good LLM) to use tools in the context of my own framework quite easily.

first of all, when i began i was quite skeptical of commercial AI.

this exercise has only made me more skeptical, for a few reasons:

first: you actually can hit the "good enough" point for text prediction with very little data. 80GB of low-quality (but ethically sourced from $HOME/logs) training data yielded a bot that can compose english and french prose reasonably well. if i additionally trained it on a creative commons licensed source like a wikipedia dump, it would probably be *way* more than enough. i don't have the compute power to do that though.

second: reasoning models seem to largely be "mixture of experts" which are just more LLMs bolted on to each other. there's some cool consensus stuff going on, but that's all there is. this could possibly be considered a form of "thinking" in the framing of minsky's society of mind, but i don't think there is enough here that i would want to invest in companies doing this long term.

third: from my own experiences teaching my LLM how to use tools, i can tell you that claude code and openai codex are just chatbots with a really well-written system prompt backed by a "mixture of experts" model. it is like that one scene where neo unlocks god mode in the matrix, i see how all this bullshit works now. (there is still a lot i do not know about the specifics, but i'm a person who works on the fuzzy side of things so it does not matter).

fourth: i built my own LLM with a threadripper, some IRC logs gathered from various hard drives, a $10k GPU, a look at the qwen3 training scripts (i have Opinions on py3-transformers) and a few days of training. it is pretty capable of generating plausible text. what is the big intellectual property asset that OpenAI has that the little guys can't duplicate? if i can do it in my condo, a startup can certainly compete with OpenAI.

given these things, i really just don't understand how it is justifiable for all of this AI stuff to be some double-digit % of global GDP.

if anything, i just have stronger conviction in that now.

@ariadne Having studied up a bit myself I can fill in a few pieces. Reasoning models have just been trained to chatter on in some kind of preamble that is intended to be hidden or de-emphasized in the UI, possibly wrapped in tags like <reasoning>blah blah blah</reasoning>, followed by a shorter answer. Mixture of experts is an orthogonal idea: structure the model so predictions can be run using only a subset of the experts, in order to use less compute. Both ideas make models hard to train, for different reasons.
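Stripping that preamble out for display is trivial, by the way. A toy sketch, assuming the <reasoning> wrapping above (real chat formats vary between labs):

```python
import re

def split_reasoning(output: str) -> tuple[str, str]:
    """Separate the hidden reasoning preamble from the visible answer."""
    m = re.search(r"<reasoning>(.*?)</reasoning>\s*(.*)", output, re.DOTALL)
    if m:
        return m.group(1).strip(), m.group(2).strip()
    return "", output.strip()  # no preamble: everything is the answer

reasoning, answer = split_reasoning(
    "<reasoning>blah blah blah</reasoning>\nthe answer is 42"
)
# reasoning == "blah blah blah", answer == "the answer is 42"
```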
@mirth sure, but the "thinking" ones do some consensus stuff to ensure it doesn't go off course
@ariadne Not at prediction time; they do another stage of training that works a bit differently, but the resulting model is structurally identical to the input model. I think you're very right about the lack of defensibility though: if you wanted to catch up with the leading labs in a year or two, you could probably do it with around $200M and the charisma to recruit the people who know how to do this stuff.
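To make the mixture-of-experts point concrete, here's a toy numpy sketch with random weights and a made-up top-k rule, purely for illustration (a real MoE layer learns all of these, per token, inside a transformer): only k of the n expert matmuls actually run.

```python
import numpy as np

rng = np.random.default_rng(0)
n_experts, d, k = 8, 16, 2

# toy stand-ins; a real MoE layer learns these
experts = rng.normal(size=(n_experts, d, d))
router = rng.normal(size=(d, n_experts))

def moe_forward(x):
    scores = x @ router                  # router scores each expert
    top_k = np.argsort(scores)[-k:]      # keep only the k best experts
    w = np.exp(scores[top_k])
    w /= w.sum()                         # softmax over the chosen few
    # only k of n_experts matmuls run, which is the whole compute win
    return sum(wi * (x @ experts[i]) for i, wi in zip(top_k, w))

y = moe_forward(rng.normal(size=d))      # shape (16,), same as a dense layer
```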
@ariadne I should say by "catch up" I mean getting to parity. My impression is that model research is kind of like drug development, where a lot of the cost is paying for all the experiments that don't work; as a result it's much easier to catch up than to get out "ahead", whatever that means. Setting aside the ethical issues, the functional issue of how to effectively use plausible-sounding crap generators as part of reliable software systems remains unsolved.
@mirth the question is why compete with them at all? it has the same energy as the unix wars: large, proprietary models that lock people in. i would rather see a world of small, modular libre models that anyone with a weekend and a GPU can reproduce.

@ariadne To me it's a question of sufficient output quality: the strongest models available just barely function well enough to do a little bit of general purpose instructed information processing, unreliably. That will improve over time but the current stuff is very early.

The reason I'm a bit skeptical of a proliferation of weekend-sized models is that that size sacrifices the key ingredient enabling the whole LLM craze: the magical-looking ability to follow plain language instructions.

@mirth i mean, i don't think that necessarily holds *if* you have the ability to build whatever you need with legos.

in many cases simply translating natural language to a specification for an expert system is enough
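e.g. a toy sketch of what i mean (the `to_spec` helper and its one-rule grammar are made up for illustration): the language side only has to produce a machine-checkable spec, and a deterministic expert system does the actual work from there.

```python
import re

def to_spec(request: str) -> dict:
    """Translate a narrow slice of natural language into a structured spec."""
    m = re.match(r"rename (\w+) to (\w+) in (\S+)$", request)
    if m is None:
        # unlike a text predictor, this fails loudly instead of guessing
        raise ValueError("request not understood, ask the human to rephrase")
    return {"action": "rename", "old": m.group(1),
            "new": m.group(2), "file": m.group(3)}

spec = to_spec("rename foo to bar in main.c")
# {"action": "rename", "old": "foo", "new": "bar", "file": "main.c"}
```

no 175B parameters required.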

@ariadne
Yeah, one thing I've wondered is how much simpler a system could be if, instead of processing code, it took the plain english "refactor this to blah blah", processed just the language, and figured out what to tell the IDE etc. to do for everything else.

Run a calculator instead of being one - and you have a much simpler problem to solve.
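Something like this toy, hypothetical sketch: the "language" half is just a regex here, but the point is the calculator half is exact rather than predicted.

```python
import ast
import operator as op

# the understood operations map straight onto python's exact arithmetic
OPS = {ast.Add: op.add, ast.Sub: op.sub, ast.Mult: op.mul, ast.Div: op.truediv}

def calc(node):
    if isinstance(node, ast.Constant):
        return node.value
    if isinstance(node, ast.BinOp):
        return OPS[type(node.op)](calc(node.left), calc(node.right))
    raise ValueError("unsupported expression")

def answer(question: str) -> float:
    # the "language" step: strip the chatter, keep the math
    expr = question.lower().removeprefix("what is").strip(" ?")
    # the "calculator" step: run it for real instead of predicting it
    return calc(ast.parse(expr, mode="eval").body)

print(answer("what is 2 + 3 * 4"))  # 14
```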

Could the reliability and ethical problems all be solved? maybe, i dunno. but it's yet another case of "tech could be cool if the harmful parts go away..."

@pixx @mirth i think small LLMs do not really have an ethical problem: i trained a 1.3B parameter LLM off of my own personal data in my apartment by simply being patient enough to wait. no copyright violations, no boiling oceans, just patience and a professional workstation GPU with 96GB of VRAM.

the ethical problem is with the Big AI companies who feel that the only path forward is to make bigger and bigger and bigger monolithic prediction models rather than properly engineer the damn thing.

that same ethical problem is driving the hoarding, because companies are buying the hardware to prevent their competitors from having it IMO.

@ariadne
Mostly agree, but most uses of automated text generation that I've seen are either toys or evil
@pixx @mirth yes, i agree that the main use case for automated text generation is antisocial stuff like spam. what i am pursuing is more "language as I/O" than text generation. think Siri.
@ariadne @pixx @mirth Writing boring boilerplate code and writing machine-checkable proofs are two things I think LLMs might be useful for. Formal proofs in particular are so verbose that they take a huge amount of time for humans to write them by hand.
@pixx @mirth @alwayscurious i'm concerned about the copyrightability of the code generated by LLMs
@ariadne @pixx @mirth Copyrightability or legality? It not being copyrightable isn’t a problem. Are you concerned that it is infringing?
@pixx @mirth @alwayscurious I am concerned about both, but case law so far shows that users using the model are probably fine, while commercial AI operators are liable for operating in bad faith.

@pixx @mirth @alwayscurious and it being not copyrightable is a problem because not all jurisdictions have "public domain".

and "public domain" is also a risk to OSS licensing.

@ariadne @pixx @mirth I think it is ineligible for copyright protection, which is equivalent to a maximally permissive license that allows anything.
@ariadne @pixx @mirth @alwayscurious every jurisdiction I'm aware of has a public domain, but not every jurisdiction makes it possible to transfer your work into it. For example, Germany does not allow copyright transfer at all (I technically am the copyright holder of all code I write for work, though in practice the rights are delegated to my employer)