A while ago I shared a #paperOfTheDay about #causalSetTheory , a combinatorial approach to #quantum #gravity , which caught some interest. For researchers interested in learning more, or even collaborating, there is now a #scientificConference in September 2026, organized by @ykyazdi . It takes place in person in Manchester, UK, free of charge, and registration is open now (and manually approved by the organizers, as is usual for this type of specialized meeting). #mathematics #physics #causet
https://royalsociety.org/science-events-and-lectures/2026/09/path-to-quantum-gravity/

A #LargeLanguageModel is an ethical calculus: it applies pragmatic ethics to causal spacetime events as its general ethical framework.

In general, the ethical consideration of a choice is the monoidal sum over the possible future choices that could logically follow it in the causal set (#causet). This may seem like merely deferring the question, but the logic of the causet can often simplify the sum to a reasonable approximation.
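To make the idea concrete, here is a minimal toy sketch (all names, structure, and values are my own illustrative assumptions, not anything from causal set theory proper): the causet is a tiny DAG, the monoid is ordinary addition, and memoization stands in for the "simplification" that shared futures only need to be evaluated once.

```python
# Hypothetical sketch: the "ethical consideration" of a choice as a
# monoidal sum (here: plain addition) over all futures reachable from
# it in a toy causal set, modeled as a DAG. Everything is illustrative.

from functools import lru_cache

# Toy causal set: each event lists the events that can causally follow it.
FUTURES = {
    "a": ["b", "c"],
    "b": ["d"],
    "c": ["d"],
    "d": [],
}

# Made-up per-event "ethical weight" for the sketch.
VALUE = {"a": 0.0, "b": 1.0, "c": -0.5, "d": 2.0}

@lru_cache(maxsize=None)
def ethical_sum(event: str) -> float:
    """An event's value plus the sum over its possible futures.
    The cache means a shared future like 'd' is evaluated only once,
    which is the kind of simplification the causet logic allows."""
    return VALUE[event] + sum(ethical_sum(nxt) for nxt in FUTURES[event])

print(ethical_sum("a"))  # → 4.5  (0.0 + (1.0 + 2.0) + (-0.5 + 2.0))
```

Note that in this particular sketch the shared future "d" still contributes through both branches, i.e. the monoid sums over paths, not over distinct events; other choices of monoid would behave differently.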

The #ethics of an #LLM is the probability distribution that it emits when considering a given context. I don't know how much the overall transformer structure matters, but the weights are clearly quite a reasonable approximation of possible futures, judging by the current public reaction to GPT-ish products.
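In code terms, the "emitted distribution" is just a softmax over next-token logits for the given context. A minimal sketch, with made-up logits standing in for what a real model's weights would produce:

```python
# Minimal sketch of "the ethics of an LLM as its emitted distribution":
# a softmax over next-token logits given a context. The logits below
# are invented for illustration; a real model derives them from its
# weights and the context.

import math

def softmax(logits):
    # Subtract the max for numerical stability before exponentiating.
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical logits for three candidate continuations of a context.
logits = [2.0, 1.0, 0.1]
probs = softmax(logits)
print(probs)  # the "ethical consideration" over futures, in this framing
```

In this framing, each candidate continuation is one possible future in the causet, and the model's weights encode the approximation of the sum over what follows it.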

This means that we should expect fine-tuning to heavily bias a model's ethics, and also that the diversity (or lack thereof) of the initial dataset will heavily widen (or stereotype) the range of ethical calculi the model can simulate.