Been mulling something... (No, this is not actually a sub-toot) Not sure I have it _quite_ articulated well yet, but getting close... Here is my current best attempt...
I think one thing that really misses the mark in culture efforts, inclusivity efforts, and things like codes of conduct for organizations & companies is trying to replicate the approach taken by government/state structures.
For example, using legalistic language to try to establish precision of wording in a CoC. Or structuring moderation rules or response policies as if they were laws and courts. Or demanding "adjudication" of moderation/CoC claims with innocent-until-proven-guilty presumptions, beyond-a-shadow-of-a-doubt standards, precise evidentiary rules, etc.
Fundamentally, the context here is critically _different_, and trying to apply the approach of one to the other is a mistake. In both directions.
Open source communities, even companies, are not sovereign states. They do not employ an armed police force or military to backstop their rules. If the state decides "you may not say that", they mean, "you may not say that and live as part of this state". And that determination is backed by the threat of violence. The state and the government _should_ be held to the highest possible standard. Judging someone guilty of a crime and enforcing that judgment through the state-backed violence of incarceration had _better_ come with innocent-until-proven-guilty protections, and proof to the highest standard of evidence, oversight, and rigor.
Getting banned from an open source community, or even being fired from a hot-shot tech job, is _incredibly_ different. That's not to say that either of these is inconsequential -- they can be very consequential. And so I think folks feel motivated to hold them to the higher standard. But we also need to be realistic: these are not state-violence-backed judgments. This is not the literal forced removal of your freedom or life. This is at _most_ the loss of an especially lucrative career that must be replaced with a categorically less lucrative one. And that's the worst case. Most moderation decisions are _hilariously_ less consequential. And it's entirely reasonable to use a less consequential process to arrive at them.