In the Prisoner's Dilemma, the optimal strategy for dealing with an opponent who always defects is for you to, in turn, always defect.

It's time for the anti-Trump movement to really understand this. There is this compulsion to "find common ground" or compromise as a way to win over more moderate Trump supporters to our side. This moderated strategy will lose every single time because the MAGA movement as a whole has always embraced an always defect strategy.

@tdverstynen

Technically, this is the Assurance Game or the Stag Hunt.* In the Prisoner's Dilemma, it is always better for the individual to defect. That's the problem: if we are playing a Prisoner's Dilemma, then "cheating makes you smart" because it is always better to cheat.

The point of society is that social codes have the effect of transforming Prisoner's Dilemmas into Assurance Games, where it is better to cooperate ... iff the others are going to cooperate with you.

* I like to describe the Stag Hunt in terms of Infrastructure: Imagine we are the mayors of a couple of towns with a river running between our towns, and we each have enough money to build half a bridge. If you are going to build your half (cooperate), I want to build my half. If you are going to throw a party for your town (defect), I don't want to build half a bridge to nowhere. What I really want to do is convince you to cooperate, so we have a working bridge between our communities.
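A minimal sketch of the difference (illustrative textbook-style payoffs, not numbers from this thread): in the Prisoner's Dilemma, defecting is the best response to everything, while in the Stag Hunt the best response matches the opponent, cooperating iff they cooperate.

```python
# Illustrative payoffs (assumed): payoffs[(my_move, their_move)] is my payoff.
# "C" = cooperate (build my half of the bridge), "D" = defect (throw a party).
PD = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}
STAG_HUNT = {("C", "C"): 4, ("C", "D"): 0, ("D", "C"): 3, ("D", "D"): 2}

def best_response(payoffs, their_move):
    """Return the move that maximizes my payoff against a fixed opponent move."""
    return max(["C", "D"], key=lambda my_move: payoffs[(my_move, their_move)])

for name, game in [("Prisoner's Dilemma", PD), ("Stag Hunt", STAG_HUNT)]:
    print(name, {their_move: best_response(game, their_move) for their_move in "CD"})
# Prisoner's Dilemma: best response is "D" against both "C" and "D".
# Stag Hunt: best response is "C" against "C", and "D" against "D".
```

With these numbers, cooperating with an always-defect opponent loses in both games; the Stag Hunt only rewards cooperation when it is reciprocated.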

Nevertheless, you are not wrong that in an Assurance Game, cooperating when your opponent is defecting is a fool's errand. If, no matter what we say, they won't build their half of the bridge, then we will lose every time by cooperating.

What we really need to do is build groups that cooperate better within themselves and compete with the defectors directly. They'll like us when we win.

#ChangingHowWeChoose

@adredish

Yeah, I knew I was cutting corners with the analogy. The difference between these games mostly comes down to the payoff structure of the different choices. Politics isn't always structured as a Prisoner's Dilemma (or really a Stag Hunt).

I'm actually more curious about strategies for dealing with high-entropy opponents in an impartial (& combinatorial) game structured like elections. It is clear that the "flood the zone" strategy is highly effective and that the responses to it, so far, have been ineffective. So what would be an optimal strategy for responding to an opponent who tries to exhaust you with a flood-the-zone approach?

So far game theory has been unhelpful. Wondering if anyone has really looked at this.

@tdverstynen @adredish
Seems like one would need to form robust subnetworks (institutions) resistant to #misinformation (not game theory, though)…

@knutson_brain @adredish

There has to be a way to model this though.

@knutson_brain @tdverstynen

1/3

One of the takeaways from my theoretical analyses was that what was critical for community success was
(1) subgroups that differentiate their within-group competition choices
(2) the ability to move between subgroups

Importantly (3), while the subgroups need to compete for members, the competition does not need to be violent. Examples of (3) include sports teams [think Moneyball] and federated states [particularly proud of MN, for example, as a "we all do better when we all do better" state].

Given conditions (1) and (2), cooperative people tend to migrate into cooperative groups, which do better than you're-on-your-own groups.
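To make the migration pressure concrete, here's a toy expected-payoff calculation (the Stag Hunt numbers and group compositions are my own illustrative assumptions, not from the analyses above):

```python
# Assumed Stag Hunt payoffs: PAYOFF[(my_move, their_move)] is my payoff.
PAYOFF = {("C", "C"): 4, ("C", "D"): 0, ("D", "C"): 3, ("D", "D"): 2}

def expected_payoff(my_move, coop_share):
    """Expected payoff of `my_move` against a random member of a group
    whose fraction of cooperators is `coop_share`."""
    return (coop_share * PAYOFF[(my_move, "C")]
            + (1 - coop_share) * PAYOFF[(my_move, "D")])

cooperative_group = 0.9   # assumed: 90% cooperators
on_your_own_group = 0.1   # assumed: 10% cooperators

# A cooperator comparing the two groups:
print(expected_payoff("C", cooperative_group))  # 3.6
print(expected_payoff("C", on_your_own_group))  # 0.4
```

Cooperators earn far more inside the cooperative group, so with free movement between subgroups (condition 2), that payoff gap is exactly the pressure that sorts cooperators together.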

@knutson_brain @tdverstynen

2/3

I suspect that one of the problems is that the two groups (R and D) have different cooperativity scales.

One group (R) is fully dedicated to parochial altruism (work within the group, but never beyond), while the other group (D) is still trying to do bipartisanship. Note, however, that one of the problems is that you're-on-your-own groups tend to devolve into infighting and don't end up being particularly successful.

@knutson_brain @tdverstynen

3/3

The other issue, as was mentioned, is that these models do not usually include either of the following two problems:

1. #misinformation whereby people make group and policy choices on incorrect information, control of which can be used to steer defection (cheating) choices in Prisoner's Dilemma and anti-coordination games (such as Matching Pennies).

2. #ChaosAgents whereby people break group policy choices by acting randomly, often to their own detriment, but with devastating consequences for other groups.

As I understand it, #ChaosAgents in a coordination game would reduce both the overall gain and the individual gain. This could work well for someone if they (a) didn't care about their individual gain, (b) had strong #spite goals (OK for me to lose as long as you lose as well), or (c) had sufficient backup resources to weather the losses.

It would be interesting to model these.
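As a rough starting point, here's a toy version (my own illustrative setup, not a model from the thread) of a #ChaosAgent in a pure coordination game, where matching moves pay 1 to each player and mismatches pay 0:

```python
def expected_payoffs(p_me_heads, p_partner_heads):
    """Expected (my, partner's) payoff when each plays 'heads' with the given
    probability; both players earn 1 on a match and 0 on a mismatch."""
    p_match = (p_me_heads * p_partner_heads
               + (1 - p_me_heads) * (1 - p_partner_heads))
    return p_match, p_match  # pure coordination: payoffs are shared

print(expected_payoffs(1.0, 1.0))  # coordinated pair: (1.0, 1.0)
print(expected_payoffs(1.0, 0.5))  # vs a uniformly random chaos agent: (0.5, 0.5)
```

Against a uniformly random partner, no strategy earns more than 0.5, so the chaos agent halves everyone's expected gain, including its own, which is only rational under (a), (b), or (c) above.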