I've opened a new #4opens issue proposing a bounded experiment, not a solution: a funding system where rules are fixed before deployment and no human makes allocation decisions afterward.
The goal is to explore whether some informal power failures can be reduced.
Discussion welcome:
https://unite.openworlds.info/Open-Media-Network/4opens/issues/18

Exploring a 4opens-compatible funding experiment with no post-deployment human decision-making
This issue follows from the ongoing discussion around survivability, DIY culture, and implicit funding models under 4opens: https://unite.openworlds.info/Open-Media-Network/4opens/issues/17

I am not proposing a solution or a replacement for existing cultures. I am proposing a bounded experiment, intended to explore whether some failure modes can be reduced rather than eliminated.

## Context

Repeatedly, funding and survivability problems appear to concentrate informal power in human decision-making: fatigue, bias, capture, personal networks, crisis pressure, or post-hoc rule changes. Even when intentions are good, this tends to reintroduce opacity and authority over time.

At the same time, survivability is often treated as implicit or deferred to wider social change. In practice, this selects for people with unusual resilience, safety nets, or tolerance for precarity.

This experiment is an attempt to treat survivability as a design constraint without defaulting to NGOs, professionalization, or heroic self-sacrifice.

## The experiment (not a solution)

The core idea is simple:

- Humans may contribute funds to the system.
- Humans define the rules **before** deployment.
- After deployment, no human makes allocation decisions.

Before release, a very explicit and restrictive rule set would need to be agreed:

- what kinds of projects qualify
- what signals count as progress
- how abuse or stagnation is detected
- when funding is reduced or stopped

Once deployed, the system cannot be modified. Funding would be:

- small in amount
- time-limited
- explicitly experimental
- stopped automatically if conditions are not met

Failure is expected and acceptable. The goal is learning, not permanence.

## Transparency

The entire system would be designed to be transparent by default. The rationale, decision logic, and execution process would be entirely public.
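As a purely illustrative sketch of what "fixed rules, no post-deployment human decisions" could mean in code, the fragment below shows a rule set frozen at deployment and a pure decision function whose reasoning is emitted alongside every result. All rule names, signals, and thresholds are hypothetical placeholders, not a proposed real-world policy:

```python
from dataclasses import dataclass, asdict
import json
import time

# Hypothetical rule set, agreed before deployment and never changed after.
# Every name and number here is illustrative only.
RULES = {
    "min_progress_signals": 2,   # e.g. releases, commits, public reports
    "max_silent_days": 60,       # stagnation detection
    "payout_per_period": 50,     # deliberately small amount
    "max_periods": 6,            # time-limited by construction
}

@dataclass(frozen=True)  # frozen: decisions cannot be mutated after the fact
class Decision:
    period: int
    pay: bool
    reason: str

def evaluate(period: int, progress_signals: int, days_since_activity: int) -> Decision:
    """Pure function of public inputs: identical inputs always yield
    the identical decision, with the reasoning attached."""
    if period >= RULES["max_periods"]:
        return Decision(period, False, "experiment window elapsed")
    if days_since_activity > RULES["max_silent_days"]:
        return Decision(period, False, "stagnation: no recent activity")
    if progress_signals < RULES["min_progress_signals"]:
        return Decision(period, False, "insufficient progress signals")
    return Decision(period, True, "all pre-agreed conditions met")

def log_decision(d: Decision) -> str:
    """Serialise a decision as one line of an append-only public trace,
    so anyone can audit how it was reached."""
    return json.dumps({"ts": int(time.time()), **asdict(d)})
```

Because `evaluate` is deterministic and the rule table is part of the published source, the "thinking encoded into the system" is inspectable before any funds move.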
This includes:

- the rules themselves
- the reasoning behind those rules
- the evaluation criteria used by the system
- the full source code implementing the logic
- any models, heuristics, or decision mechanisms involved
- logs or traces showing how decisions are reached in practice

The intention is that not only outcomes, but the *thinking encoded into the system*, is visible and inspectable by anyone.

Fund flows should also be transparent and auditable. Some form of distributed ledger or crypto system could be used purely as an implementation detail to make flows visible and hard to quietly redirect, not as an ideological commitment.

## What this is not

- Not a replacement for DIY culture
- Not a scalable funding model
- Not a claim that humans should be removed from social processes
- Not an attempt to solve survivability in general

It is a constrained probe into whether specific power concentrations can be reduced under #4opens constraints.

## Open questions

- Is an experiment like this compatible with #4opens and the PGA hallmarks?
- Which failure modes would be considered unacceptable?
- At what point does this clearly reintroduce enclosure or hidden power?
- Are there aspects that are fundamentally in conflict with OMN values?

## Personal note

If this direction is of interest, I would like to be actively involved in its design and development. I am not seeking endorsement, only a clear sense of whether this is a space OMN considers valid to explore.
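To make the Transparency point about auditable fund flows concrete: a full ledger is not needed to sketch the property that matters. The minimal example below (field names and amounts are hypothetical) hash-chains each flow record to the entire prior history, so any quiet retroactive edit is detectable by anyone holding a copy of the log:

```python
import hashlib
import json
import time

def append_entry(chain: list, amount: int, recipient: str, reason: str) -> dict:
    """Append a fund-flow record whose hash commits to the whole prior
    history. All field names here are illustrative only."""
    prev = chain[-1]["hash"] if chain else "0" * 64
    entry = {"ts": int(time.time()), "amount": amount,
             "recipient": recipient, "reason": reason, "prev": prev}
    entry["hash"] = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode()).hexdigest()
    chain.append(entry)
    return entry

def verify(chain: list) -> bool:
    """Recompute every hash and link; any retroactive change, however
    small, breaks the chain from that point onward."""
    prev = "0" * 64
    for e in chain:
        body = {k: v for k, v in e.items() if k != "hash"}
        expected = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()).hexdigest()
        if e["prev"] != prev or e["hash"] != expected:
            return False
        prev = e["hash"]
    return True
```

This is the sense in which a ledger is an implementation detail: the goal is tamper-evidence of flows, which a plain published hash chain already provides; a distributed system only adds shared custody of the log.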


