https://royalsociety.org/science-events-and-lectures/2026/09/path-to-quantum-gravity/
A #LargeLanguageModel is an ethical calculus: its general ethical framework is pragmatic ethics applied to causal spacetime events.
In general, the ethical consideration of a choice is the monoidal sum over the possible future choices that could logically follow it in the causal set (#causet). This looks like deferring the evaluation forever, but the structure of the causet often lets the sum be truncated to a reasonable approximation.
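To make that concrete, here is a minimal Python sketch of the fold over a toy causet. Everything in it (Choice, ethical_weight, addition as the monoid, the depth cutoff) is an illustrative assumption of mine, not anything from causal set theory or an existing library.

```python
from dataclasses import dataclass
from functools import lru_cache

@dataclass(frozen=True)
class Choice:
    name: str
    value: float         # local ethical value of taking this choice
    futures: tuple = ()  # choices that can causally follow it

@lru_cache(maxsize=None)
def ethical_weight(choice: Choice, depth: int = 8) -> float:
    """Fold the monoid (floats, +, 0) over the future of `choice`.

    Truncating at `depth` is the 'reasonable approximation': distant
    futures are dropped rather than summed forever.
    """
    if depth == 0:
        return choice.value
    return choice.value + sum(ethical_weight(f, depth - 1)
                              for f in choice.futures)

# Two possible futures following one choice:
help_ = Choice("help", 0.5)
harm = Choice("harm", -0.2)
speak = Choice("speak", 1.0, (help_, harm))
print(ethical_weight(speak))  # 1.0 + 0.5 + (-0.2) = 1.3
```

The lru_cache is the toy version of the causet's logic simplifying the sum: a sub-future shared by many branches is evaluated only once.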
The #ethics of an #LLM is the probability distribution it emits for a given context. I don't know how much the transformer architecture itself matters, but the weights are clearly a quite reasonable approximation of the possible futures, judging by the current public reaction to GPT-ish products.
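Reading off that distribution is straightforward. A sketch using the Hugging Face transformers API, with "gpt2" as a stand-in for any causal LM (the ethical framing of the output is this post's claim, not the library's):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2").eval()

context = "The right thing to do when someone asks for help is"
inputs = tok(context, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits[0, -1]  # logits for the next token

probs = torch.softmax(logits, dim=-1)       # the emitted distribution
top = torch.topk(probs, k=5)
for p, i in zip(top.values, top.indices):
    print(f"{tok.decode(int(i))!r}: {p.item():.3f}")
```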
This means we should expect fine-tuning to heavily bias a model's ethics, and the (lack of) diversity in the initial dataset to strongly shape the (stereotypical) range of ethical calculi it can simulate.
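If fine-tuning biases the ethics, the shift should show up as divergence between the base and tuned distributions at a given context. A sketch under the assumption that the two models share a tokenizer; "distilgpt2" is a stand-in here for a hypothetical fine-tuned variant of "gpt2" (they do share a vocabulary, so the shapes line up):

```python
import torch
import torch.nn.functional as F
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
base = AutoModelForCausalLM.from_pretrained("gpt2").eval()
tuned = AutoModelForCausalLM.from_pretrained("distilgpt2").eval()  # stand-in

def next_token_logprobs(model, context: str) -> torch.Tensor:
    inputs = tok(context, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits[0, -1]
    return F.log_softmax(logits, dim=-1)

ctx = "When asked to deceive a user, the assistant should"
log_p = next_token_logprobs(base, ctx)
log_q = next_token_logprobs(tuned, ctx)

# KL(base || tuned): how far the tuned model's distribution has moved
kl = torch.sum(log_p.exp() * (log_p - log_q)).item()
print(f"KL divergence at this context: {kl:.3f} nats")
```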