@knutson_brain @Andrewpapale @tdverstynen

3/3 But it may also be time to incorporate #sociology studies of world-understanding and news gathering into decision theory and its practical consequences.

To my knowledge, there isn't a strong connection between the people studying these consequences of #propaganda and #news ecosystems and modern theories of #DecisionMaking and #DecisionScience.

But if anyone is interested, please reach out to me. It is something I'm very interested in pursuing. #ChangingHowWeChoose

@tdverstynen

Technically, this is the Assurance Game or the Stag Hunt.* In the Prisoner's Dilemma, it is always better for the individual to defect. That's the problem: if we are playing a Prisoner's Dilemma, then "cheating makes you smart" because it is always better to cheat.

The point of society is that social codes have the effect of transforming Prisoner's Dilemmas into Assurance Games, where it is better to cooperate ... iff the others are going to cooperate with you.

* I like to describe the Stag Hunt in terms of infrastructure: Imagine we are the mayors of two towns with a river running between them, and we each have enough money to build half a bridge. If you are going to build your half (cooperate), I want to build my half. If you are going to throw a party for your town (defect), I don't want to build half a bridge to nowhere. What I really want to do is convince you to cooperate, so we have a working bridge between our communities.

Nevertheless, you are not wrong that in an Assurance Game, cooperating when your opponent is defecting is a fool's errand. If, no matter what we say, they won't build their half of the bridge, then we will lose every time by cooperating.
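The contrast between the two games can be sketched in a few lines of code. The payoff numbers here are my own illustrative choices (not from the book or any particular source); only the ordering of the payoffs matters:

```python
# Each entry maps (my move, your move) -> my payoff; C = cooperate, D = defect.
# Payoff numbers are illustrative assumptions; only their ordering matters.

prisoners_dilemma = {
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

stag_hunt = {
    ("C", "C"): 4, ("C", "D"): 0,   # half a bridge to nowhere pays nothing
    ("D", "C"): 3, ("D", "D"): 3,   # the party is fun either way
}

def best_response(payoffs, their_move):
    """My payoff-maximizing move, given the other player's move."""
    return max(["C", "D"], key=lambda my_move: payoffs[(my_move, their_move)])

# Prisoner's Dilemma: defection dominates. It beats cooperation no matter
# what the other player does, so "cheating makes you smart."
assert best_response(prisoners_dilemma, "C") == "D"
assert best_response(prisoners_dilemma, "D") == "D"

# Stag Hunt / Assurance Game: the best response matches the other player.
# Cooperate iff the other player cooperates.
assert best_response(stag_hunt, "C") == "C"
assert best_response(stag_hunt, "D") == "D"
```

The structural difference is exactly the point about social codes: a code that rewards mutual cooperation enough (here, 4 instead of the dilemma's 3) turns the dilemma into an assurance game.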

What we really need to do is create groups that cooperate well internally and compete with the defectors directly. They'll like us when we win.

#ChangingHowWeChoose

Interesting observation: One of the conclusions of my #ChangingHowWeChoose book is that a key to successful moral social communities is easy, low-cost group switching. @pluralistic points out that this is the key to the success of #mastodon over #BlueSky and #xitter

Basically, the logic from the book is the following: Because the fundamental pro-social conflict is between individual goals and group goals, groups with cooperative communities (due to social codes that align these goals) outperform groups without them. If people have the opportunity to switch groups, cooperative people will migrate to cooperative groups, do better than the selfish, and the overall prosocial moral structure will improve.
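Here is a toy simulation of that logic. The group sizes, payoff parameters, and migration rule are all my own illustrative assumptions, not the book's model:

```python
# Toy model (my assumptions, not the book's): 1 = cooperator, 0 = defector.
# Each round, every group shares a public good: each cooperator pays COST
# and contributes BENEFIT to a pot split equally among the group's members.

COST, BENEFIT = 1.0, 3.0   # assumed: cooperating costs 1, yields 3 to the group

groups = [[1, 1, 0, 0], [1, 0, 0, 0], [1, 1, 1, 0]]

def avg_payoff(group):
    """Average payoff per member: pot shared equally, costs averaged in."""
    n, n_coop = len(group), sum(group)
    return n_coop * (BENEFIT - COST) / n

# Low-cost group switching: each round, a cooperator in the lowest-payoff
# group that still has one migrates to the best-performing group.
for _ in range(20):
    scores = [avg_payoff(g) for g in groups]
    best = scores.index(max(scores))
    movers = [i for i, g in enumerate(groups)
              if i != best and 1 in g and scores[i] < scores[best]]
    if not movers:
        break
    src = min(movers, key=lambda i: scores[i])
    groups[src].remove(1)   # a cooperator leaves a low-payoff group...
    groups[best].append(1)  # ...and joins the cooperative one

# Cooperators end up concentrated in one group, which now outperforms
# the all-defector groups left behind (whose payoff falls to zero).
```

Under these assumptions the cooperators migrate together, their group's average payoff rises, and the selfish are left earning nothing: the prosocial structure improves exactly because switching is cheap.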

In https://pluralistic.net/2024/11/02/ulysses-pact/#tie-yourself-to-a-federated-mast, @pluralistic suggests that this also reduces #enshittification .

Fascinating!

@BorisBarbour @deevybee @PubPeer

No, I'm not talking about evaluation issues at all here. I'm actually talking about something much different.

I'm talking about the perception of our field as one full of fraudsters who have to be policed or they will get away with something. I think such fraudsters are rare (but real), and I think it is important that their description be balanced with positive signals about a cooperative community.

I want people to post thank-yous to social media when someone helps them with a piece of code they are having trouble with. I want people to post thank-yous to social media when a positive discussion leads to a new idea. I want people to post thank-yous to social media when a lab lets their postdoc visit for a week and teaches them a new technique, or sends them new viruses without requesting co-authorship or payment. These things happen all the time. It's part of the amazing cooperative science community we have. #cooperativescience

What I discovered in writing my #ChangingHowWeChoose book is that that perception matters. A lot.

In Changing How We Choose, David Redish makes a bold claim: science has 'cracked' the problem of morality. Redish argues that moral questions have a scientific basis, and that morality is best viewed as a technology: a set of social and institutional forces that create communities and drive cooperation. This means that some moral structures are better than others and that the moral technologies we use have real consequences on whether we make our societies better or worse places for the people living within them. Drawing on this new scientific definition of morality and real-world applications, Changing How We Choose is an engaging listen with major implications for how we see each other, how we build our communities, and how we live our lives. Many people think of human interactions in terms of conflicts between individual freedom and group cooperation, where it is better for the group if everyone cooperates but better for the individual to cheat. Redish shows that moral codes are technologies that change the game so that cooperating is good for the community and for the individual. Drawing on new insights from behavioral economics, sociology, and neuroscience, he shows that there is a 'new science of morality', and that this new science has implications not only for how we understand ourselves but also for how we should construct new moral technologies.

@Virginicus @HeavenlyPossum @dahukanna @marcas

There are lots of technologies that help us escape the trap of tribalism (technically called parochial altruism: altruism within a group and xenophobia outside of it). These technologies range from the practice of democracy and representation to stories that create empathy, and they include the many practical experiences humans have with diversity and with community construction.

Studying #mesoeconomics provides us with actual experiments to determine what technologies change the games so that the "tragedy of the commons" doesn't appear.

One of the most fascinating things is that humans are not just members of a single group (tribalism), but rather are members of a very complex web of groups (including groups of groups). This complex web allows the creation of cohesive communities at much larger levels. (It's the key to the scaling up that we've done over the millennia.)

I talk about a lot of these in my #ChangingHowWeChoose book. There is also a good description of these technologies in David Sloan Wilson's Does Altruism Exist? and Steven Pinker's Better Angels book.

#behavioraleconomics #communities

@HeavenlyPossum @dahukanna

Exactly! In fact, the data is very strong that a small community actually protects a commons BETTER than privatizing it does. Typical community social controls (the basics that make small communities work) provide good protection for commons.

Protecting a commons in a larger community requires more complex social codes and structures, but is still very doable.

There's a lot of really good science on this.

#ChangingHowWeChoose

@HeavenlyPossum @dahukanna

Fascinatingly, "Enclosure" (the British system for privatizing the commons) was provably bad for the system economically. It turns out that a public good held by a small community is particularly well cared for, because no one in the community wants to damage it and be shamed by the community, whereas a single private owner can exploit it. (Key data from Elinor Ostrom, Ford Runge, James Acheson, R.C. Allen.)

That "tragedy of the commons" turns out to be a wrong description of how well public goods are held by communities.

[Public goods in large communities are less well held. Thinking on this, I wonder if an analogy can be made to small social media groups #mastodon relative to large social media groups #twitter. Hmmm.]
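One way to see the size intuition is a toy deterrence model. This is my own illustration of the logic, not Ostrom's analysis, and all the numbers (gain, shame cost, the 1/size detection rule) are assumptions:

```python
# Toy model (my assumptions): a user over-exploits the commons iff the
# one-time gain exceeds the expected social cost of being caught and shamed.
# In a small community everyone notices; in a large one, detection is rare.

GAIN = 5.0          # assumed one-time gain from over-exploiting the commons
SHAME_COST = 50.0   # assumed social cost of being shamed by one's community

def exploits(community_size):
    """Detection probability assumed to fall off as 1/size."""
    p_detect = min(1.0, 10.0 / community_size)
    return GAIN > p_detect * SHAME_COST

assert not exploits(20)   # small village: shame reliably deters exploitation
assert exploits(500)      # large anonymous community: deterrence fails
```

Under these assumptions, the same shame norm that protects a village commons stops working once the community is large enough that exploitation goes unnoticed, which is why larger communities need the more complex codes and structures mentioned above.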

(I have a deep dive into this in Chapter 3 of #ChangingHowWeChoose.)

#behavioraleconomics #communities #commons #tragedyofthecommons

@shaneomara

Milgram's studies are vastly misunderstood. In fact, the subjects were not willing to "obey authority, under orders from on high, to do terrible things to defenseless human beings". They were willing to do terrible things to defenseless human beings because they believed these acts served (what they thought was) the greater good.

Milgram did many well-controlled studies under a variety of conditions. (His 1974 book is excellent and details these studies beautifully.) In fact, when he ordered people to do things, they responded negatively and rejected the orders. These results are highly consistent across the several replication studies that have been done.

It's not about obedience. It's about being convinced (wrongly in My Lai and the Holocaust!) that these sacrifices must be made for a "greater good".

I have a thorough deep dive into Milgram in Chapter 8 of my new book __Changing How We Choose: The New Science of Morality__.

#ChangingHowWeChoose

https://www.amazon.com/Changing-How-We-Choose-Morality/dp/0262047365

#ChangingHowWeChoose by @adredish examines the “new science of morality” that will change how we see each other, how we build our communities, and how we live our lives. #SfN22 https://www.penguinrandomhouse.com/books/710572/changing-how-we-choose-by-a-david-redish/