So why are we seeing so many more disordered counterpublics now? The answer is a fundamental shift in the information environment. And it connects directly back to Lippmann.
Remember, Lippmann argued we rely on institutions to construct our pictures of reality. For most of the 20th century, that meant editors, broadcasters, publishers. They acted as gatekeepers, not just to information, but to who had a voice in public life.
That gatekeeping imposed some threshold of verification. But it also excluded people. And not just conspiracy theorists: it excluded legitimate voices too. Women, minorities, working-class communities. The counterpublics Fraser described existed precisely because of this exclusion.
And it's important to recognise that different populations experienced VDA very differently. If you were white, middle-class, and well-connected, institutions might verify your reality, include your voice, and hold power accountable on your behalf. Many others got none of that.
So the old system wasn't good, it was just stable. It maintained a shared picture of reality, but that picture reflected some people's experience far more than others. That exclusion created legitimate grievance long before social media existed.
That architecture has now been replaced. Social media platforms curate what most people see. Their logic isn't verification, it's engagement. Whatever holds attention gets amplified, regardless of truth or consequence.
This isn't a story about bad actors spreading disinformation. It's structural. Outrage and spectacle spread faster than rigour. Content that provokes emotion is amplified. Material that demands context or deliberation is buried.
This is why disordered counterpublics thrive. Functional counterpublics need time, rigour, and consequence. Platforms optimise for speed, volume, and virality. The architecture rewards the epistemic style of disorder.
The result: when people lose faith in institutions and look for alternatives, they're far more likely to encounter disordered counterpublics than functional ones. Disorder isn't an accident, it's the path of least resistance in this environment.
So the pseudo-environments Lippmann described haven't disappeared. The machinery that constructs them has changed, and the new machinery is optimised for engagement, not accuracy (however imperfectly the old gatekeepers delivered it). The pictures of reality people act on are now built by algorithms and communities, not editors.
So we've described how the information environment changed and why disordered counterpublics thrive in it. But there's a prior question: why do people leave institutional epistemics in the first place? Why do they stop trusting the system?
The answer starts with direct experience. When institutions fail you personally, when your doctor dismisses you, when your community is ignored, when no one is held accountable for things that damaged your life, you have rational reasons to look elsewhere.
This is VDA failure experienced at the personal level. Your reality wasn't verified. Your voice wasn't included in deliberation. No one was held accountable. The trust we described earlier, trust that institutions give you reliable pictures of reality, has been concretely betrayed.
And when that person searches for answers online, the collapsed information environment is waiting. A woman dismissed by her doctor finds alternative health communities. A worker whose community was destroyed by austerity measures finds someone offering explanations the mainstream never did.
These aren't stupid people making stupid choices. They're people whose trust was betrayed, looking for alternatives in an environment where disordered counterpublics are the path of least resistance. The grievance is often legitimate, but the destination doesn't have to be.
Michael Sandel, in The Tyranny of Merit, identified a specific and devastating form of this at scale: meritocratic humiliation. The system doesn't just fail people, it tells them their failure is deserved. If you didn't succeed, it's because you lack merit. Your resentment is just envy.
Worse, meritocratic ideology delegitimised the knowledge of those who hadn't succeeded by its criteria. To lack credentials wasn't just to be less expert, it was to be less entitled to an opinion. Democratic deliberation became an intrusion of feeling into the domain of knowledge.
This is epistemic humiliation: the arrangement of political culture so that large populations experience their democratic voice as both practically irrelevant and morally suspect. Experts know; you merely feel. Your feelings are noted, but they shouldn't govern policy.
When people withdraw from epistemic infrastructure that has humiliated them, that's not irrational. It's what rational actors do when institutions fail them. You can see this as "epistemic defection", and the direction of that defection is shaped by what's available when it happens.
This is the crucial point. Humiliation and trust betrayal determine that defection occurs. They don't determine where people land: the destination depends on the epistemic environment, and right now that environment is structurally tilted toward disorder.
So people defect from institutional epistemics for rational reasons. But where do they land? Not randomly. They land in coherent epistemic systems, structured ways of processing evidence, handling disagreement, and imposing consequence. I call these moral-epistemic stacks.
Think of a stack as a layered architecture. Where does moral authority come from? How is certainty treated? What counts as evidence? How is disagreement handled? Where does accountability flow? How tightly is identity bound to the system?
Every epistemic system, democratic or not, can be described this way. These are generic types, not a fixed list. Real-world formations combine elements and shift over time. But the types reveal where each system's architecture breaks down.
A technocratic stack: moral authority sits with expertise. Evidence is rigorous but opaque. Deliberation is minimal, disagreement is reframed as ignorance. Accountability is metric-based. It fails when its authority outpaces public consent. Sound familiar?
A populist stack: moral authority comes from "the people." Evidence is what feels true. Disagreement is betrayal. Accountability targets elites and outsiders but protects in-group leaders. It fails through authoritarian drift as constraints on leadership erode.
A conspiratorial stack: moral authority lies in revealing hidden truth. Doubt is applied to everything except the group's own claims. Evidence is selectively decoded. Accountability means naming enemies. It fails because it becomes self-sealing, nothing can disprove it.
Notice: the stacks model doesn't say these people are stupid. It says each system fails at a specific architectural layer. Technocratic systems fail at deliberation. Populist systems fail at verification. Conspiratorial systems fail at epistemic posture.
And here's a crucial point: everyone has an individual stack, shaped by biography, experience, education, what institutions have done to you. We all carry layered commitments about how truth works and who deserves to be heard.
When your individual stack is compatible with a group's stack, you experience alignment. It feels like coming home. Not because you've been manipulated, but because the group's moral commitments genuinely resonate with yours.
The problem is what happens next. You're drawn in by genuine moral alignment: anti-war conviction, distrust of elites, concern about corruption. But over time, the group's verification logic and accountability direction can begin to reshape your own.
Challenging the group's conclusions starts to feel like betraying the values that drew you in. "You're saying the attack was real" becomes "you're siding with the people who lied about Iraq." A factual question becomes a loyalty test.
This is how sincere, intelligent people end up in formations that contradict their own values. Alignment is genuine at the moral source. Capture happens at the verification layer. The stacks model explains this without dismissing anyone as irrational.
Now, counterpublics. The direction a counterpublic takes depends on its epistemic architecture. Expansive counterpublics seek to widen shared reality, to bring excluded voices in. Their methodology is open to scrutiny.
Contractive counterpublics withdraw from shared reality and build enclosed epistemic spaces with their own internal validation. Questioning methods is treated as betrayal, not legitimate scrutiny.
The civil rights movement was expansive, it demanded a better shared reality, not a separate one. Conspiracy ecosystems are contractive, they reject the mainstream as captured and build closed worlds governed by internal loyalty.
Which direction isn't predetermined. It's shaped by structural conditions, how degraded the mainstream is, and whether people experience it as reformable or fundamentally hostile. As the mainstream degrades, more counterpublics move contractive.
And in the collapsed information environment, expansive counterpublics are structurally disadvantaged. They need time, rigour, and protected space. Platforms reward the opposite: conflict, tribal identity, speed. The environment selects for contraction.
The same digital transformation that enabled distributed democratic verification (open-source investigation, citizen journalism) also enabled distributed epistemic violence. Same tools, same environment, radically different architectures.
So if different epistemic systems have different architectures, the key question becomes: which systems are compatible with functional VDA? Which ones can coexist with real verification, genuine deliberation, and meaningful accountability?
Let's look at populism. Populism is not inherently anti-democratic. Populism is a family of moral-epistemic stacks, not a single pathology. And different populist configurations have radically different relationships with democratic infrastructure.
A degenerative populist stack: moral authority comes from betrayal and violated identity. Certainty is a virtue. Correction is treated as threat. Disagreement is betrayal. Accountability targets enemies but never the in-group's own leaders. Failure produces escalation, not learning.
This configuration is compatible only with simulated VDA. Real verification undermines its certainty. Genuine deliberation threatens identity cohesion. Meaningful accountability endangers its protected leaders. It needs democratic forms without democratic substance.
A regenerative populist stack looks different. Moral authority is grounded in harm caused by unaccountable power, not abstract identity. Certainty is assertive but corrigible. Evidence is expected to constrain claims. Correction weakens the claim but doesn't collapse belonging.
This configuration is compatible with functional VDA. It challenges unaccountable power while accepting that evidence should narrow and revise claims. Disagreement is bounded, not forbidden. Accountability flows upward and, crucially, inward too.
The difference isn't about tone or intensity of grievance. Both forms can be angry, confrontational, disruptive. The difference is structural: how does the system respond to correction, disagreement, and failure? Does it learn, or does it escalate?
And here's the opportunity. If functional VDA infrastructure is available at the right speed and visibility, it can support regenerative populism. It can constrain claims with evidence, channel disagreement into remedy, and resolve mobilisation through real accountability.
When functional VDA is absent or too slow, even regenerative movements get pushed toward frame hardening. Late-arriving evidence feels like narrative weakening. Delayed accountability feels like management, not justice. The window closes and the system degenerates.
So the decisive question isn't "how do we stop populism?" (that usually backfires anyway). It's "how do we build epistemic infrastructure that's compatible with populist energy?" Can functional VDA operate at the speed moral judgement actually forms?
Suppressing populism doesn't change compatibility, it selects for stacks that can survive without functional VDA. That means degenerative configurations that thrive on simulation and coercion. The suppression strategy produces exactly what it fears.
One more crucial insight from the stacks model: different stack types can be compatible with each other. Populist, authoritarian, and conspiratorial stacks can form coalitions, not because they agree on everything, but because they share enemies and structural features.
In my opinion, this is exactly how we should understand MAGA. It's not a single ideology, it's a coalition of stacks. Populist grievance, conspiratorial epistemics, and authoritarian power structures operating together, each reinforcing the others.
The populist layer provides the moral energy: betrayal by elites, voices ignored. The conspiratorial layer provides the epistemic architecture: hidden truths, decoded evidence, designated enemies. The authoritarian layer provides the power logic: strong leadership, loyalty, punishment of dissent.
Each stack on its own has characteristic limits. But in coalition, they compensate for each other's weaknesses. Conspiratorial epistemics provide the "verification" the populist stack lacks. Authoritarian accountability protects leaders the populist stack elevates.
We can see similar dynamics in the UK. Reform UK's increasing flirtation with conspiracy theories isn't random, it reflects a degenerative populist stack seeking compatible epistemic architecture. Conspiracy provides the explanatory framework that populist grievance alone can't sustain.
This is what makes coalition formations so resilient. They're not held together by a coherent ideology. They're held together by structural compatibility, shared enemies, compatible verification logics, and mutually reinforcing accountability directions.
And this is why treating any of these elements in isolation ("it's just populism," "it's just conspiracy theories," "it's just authoritarianism") misses the point. The danger is the coalition, and the structural compatibility that makes it possible.
So if this is the diagnosis, degraded VDA, collapsed information environment, trust betrayal, epistemic defection into compatible stacks, and coalition formations, what's the reconstruction argument? What do we actually do?
First, what we don't do. We don't just try to restore the old system. Remember, the pre-collapse system excluded people and generated the humiliation that drove defection. Rebuilding the same infrastructure reproduces the same failure.
And we don't just fight disinformation. That treats the symptom, not the cause. You can fact-check every false claim on the internet and it won't rebuild the VDA infrastructure that makes shared reality possible.
Reconstruction has to do three things at once: rebuild the constraint pipeline that connects evidence to consequence, address the humiliation and exclusion that drove defection in the first place, and build infrastructure that works at the speed the current environment demands.
That means different responses for different failure modes. Where verification infrastructure has been destroyed, you rebuild production capacity. Where accountability venues have been captured, you need institutional insulation. These aren't the same problem.