the AI alignment problem is entirely a smokescreen designed to distract from the capital class alignment problem
@glyph I do think there is an interesting perspective where computer software based on deterministic execution of instructions *can* be aligned with the goals of a user, but computer software based on a trained statistical model cannot, technically, be aligned with anything at all, as its behavior is inherently random. But we can't conceptualize that problem because the capital class is lying and saying that their computer has a soul because they named it "Clyde" and drew googly eyes on it
@mcc @glyph I don't think alignment has anything to do with determinism. People are non-deterministic, but a person can absolutely be ethically aligned (or not).
@stilescrisis @glyph I think a certain sort of predictability is a prerequisite for alignment. Necessary but not sufficient. Humans are not deterministic but their behavior can be consistent, because they can act with intent. They can have beliefs and moral codes. They can understand their own incentives and the consequences of their actions. You can do things that cause them to understand the consequences of their actions better.
@mcc @glyph Right, which is why they are called "model weights" and not "model coin flips." Models are non-deterministic at the token level but pretty darn consistent at the macro level, which is why ChatGPT articles are so easy to spot. "It's not X, it's Y"; numbered lists; boldface, etc.

@stilescrisis @glyph "Models are non-deterministic at the token level but pretty darn consistent at the macro level"

At recreating the structural properties of language, yeah, because that's what the algorithm's for. But the product is not sold as a "structural-properties-of-text simulator". It is sold as an engine for producing meaning. And when it comes to meaning, the tokens matter very much, very very much
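
(A minimal sketch of the token-level vs. macro-level point being argued above: sampling from a fixed output distribution is random on any individual draw but statistically stable in aggregate. The tiny vocabulary, probabilities, and temperature value here are invented purely for illustration and do not come from any real model.)

```python
import numpy as np

rng = np.random.default_rng(seed=0)

# Hypothetical next-token distribution over a toy vocabulary;
# the logits are made up for illustration only.
vocab = ["it's", "not", "X", "rather", "Y", "."]
logits = np.array([2.0, 1.5, 0.3, 0.8, 1.9, 1.2])

def sample_next_token(logits, temperature=0.8):
    """Temperature sampling: each call can return a different token."""
    scaled = logits / temperature
    probs = np.exp(scaled - scaled.max())
    probs /= probs.sum()
    return rng.choice(len(logits), p=probs)

# Token level: individual draws differ from run to run (non-deterministic).
print([vocab[sample_next_token(logits)] for _ in range(5)])

# Macro level: over many draws the empirical frequencies converge on the
# same underlying distribution, i.e. the "pretty darn consistent" part.
counts = np.bincount(
    [sample_next_token(logits) for _ in range(10_000)],
    minlength=len(vocab),
)
print(counts / counts.sum())
```

Whether that macro-level statistical consistency counts as "alignment" in any meaningful sense is exactly what the thread is arguing about.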