If "evolutionary psychology" is pseudoscience (which is debatable to begin with), it's that way not because our evolutionary history doesn't inform our psychology but because our understanding of both those things is too immature for the questions most people are trying to answer. But that in itself depends on the questions and the level of answer one finds acceptable. I've found Michael Tomasello's book "The Evolution of Agency" perfectly proportionate in the kinds of questions it seeks to answer given the information it has, and I think the wild speculations I extrapolate from it are totally fine to share in random internet conversations.

Nah, I think that overstates the extent to which our ancestors were the hunter more than the hunted, and ignores the social dimension. An early human might have been at risk from predators when they were out alone hunting or gathering, but when you're with the group I'd think that's a much smaller threat. Having to deal with social threats from within the group, now, that's ever-present. And still present today!

Also, after reading a book about the evolution of agency that suggests the evolutionary innovation of humans is that we're a goal-seeking system that's able to function as a part of a larger goal-seeking system (collective action)... I wonder how much that can account for existential dread. We have a diffuse drive to be part of something greater than ourselves but it's not always clear what that should be.

That is absolutely hilarious. Yeah Reddit, I totally buy that you want internet communities to not depend on platforms like Reddit. This would be totally monetizable for you, not that you care about monetization and not that monetization has proven to work at cross-purposes with making good internet websites/communities. And once you mentioned blockchain, well that's when I recognized the subliminal cues suggesting a well-thought-out proposal that positively impacts the world.


Considering LLMs has made me reflect on how our own brain has separate modules for "understanding" (predicting, modelling) different things, such that not only is our understanding not reducible to language, the very notion of "meaning" for a human appeals to those non-linguistic models. Language production & understanding for us consist of converting nonverbal internal states/models into language and back. Of course there are complex feedbacks where language influences nonverbal understanding, but it still involves separate systems and understanding doesn't reduce to language.

What I'm not sure of is whether this is an inherent feature of anything that could act on its "understanding" as well as we do (which LLMs currently can't, whatever label we assign to their internal processes), or whether it's just the way we happen to do it and a system without such a separation could perform as well as we do. I can't help feeling it's an inherent feature but I haven't found a logical justification for it.

@eschaton So glad someone else thinks that! If you're on Usenet these days, what newsgroups do you know of that are active &/|| interesting? (Going back into it, I feel discoverability is a huge problem. I don't know if anyone's done a "front page of Usenet" website where you can see activity across all newsgroups and not need to check them out one by one...)
Anyone worried about Reddit disappearing: Usenet continues to exist. It’s just there, waiting to be used. Go for it.
This question is so generic I can't help but feel there is a more specific idea behind it. Can you talk about what made you want to ask this, and what kind of answers you're expecting?

This kind of "why do we seek out happiness/pleasure but stories of artificial happiness/pleasure utopias always read like dystopias" question baffled me a lot until it occurred to me recently: happiness and pleasure are evolved systems that evolved for a reason. It feels absurd to treat them like a goal because they're not a goal, they're a measure. It's a bit like you're heating something and looking at the thermometer to check it's heating right, and someone says "hey, why don't we paint the thermometer to show the value you want, that's much simpler and you'll reach your goal fine," and the answer is yes, but no. Yes, the thermometer will show the value you were aiming for, and it may have looked like that was your goal, but actually no, your goal won't be achieved, because the real goal was never the thermometer, it was heating the thing.

In our case, happiness, pleasure and so on evolved to drive us towards certain states and behaviors that it was evolutionarily beneficial for our ancestors to be in. Being physically comfortable, safe and healthy, being well-regarded by peers, achieving personal and collective goals, having friends and family who love you/have your back and you them, acting in line with what one feels is best, etc etc etc.

I think that has two consequences: 1) it's entirely possible that perfect happiness/pleasure isn't something we can ever attain, or even a coherent state, via real OR artificial means, because happiness/pleasure evolved under constraints that didn't include the requirement that such a state be attainable or even coherent. That doesn't mean it's impossible, but it definitely means there is no guarantee that it is. Certainly our experience so far with happy-making drugs suggests it's much harder than you'd think. And 2) it puts into question the assumption that this state is "good". These dystopias always seem so sterile, like what's the point of all those people being happy, why have this system go to all that trouble to make it happen? Well, why should we care about anything, right, it's all value judgements. And there are obvious reasons humans would value happiness. But there are also obvious reasons we'd value safety, comfort, loving friends and family, having children, achieving personal and collective goals, social status, discovering new things, leaving a legacy, etc etc. The "artificially happy people" dystopia assumes that we value happiness above all those other things, but that's an illusion born of the fact that happiness is a unified system driving us toward all those things. A bit like thinking money is the most important thing because everybody is trying to get some, when in reality money is just the unified vehicle for various things we really want: products and services, security, status, etc.

So insofar as all of those different goals are things we care about because we evolved to, it seems both more parsimonious and more robust to focus on goals that happiness/pleasure evolved as instruments to achieve rather than trying to hack the thermometer.

Arguably that's the difference between actual utopias and "we're all happy, that's good right?" dystopias. Actual utopias explore the conditions for human flourishing, and either portray happiness as obviously following from that or straight-up don't focus on happiness at all. Happy dystopias are dystopias precisely because the conditions they show are so antithetical to human flourishing that no reader would buy the characters are happy without the in-Universe happiness drugs or brainwashing or whatever.

The thing is that, technically, "human flourishing" (understood as the evolutionary tendency of our species to thrive & expand) is not something that can be maintained indefinitely.

I meant "human flourishing" as a shorthand for the things I listed, as in "things that tend to make individual humans feel fulfilled," not the expansion and thriving of humans as a species. I don't think the latter is always seen as utopian. For example, if I were to list utopias like The Culture, The Federation in Star Trek, Le Guin's short stories, the Abbey of Thélème... Some of those do feature human expansion, although even there it's not uncomplicated (The Federation not only explores but also colonizes uninhabited worlds, and I think it's fair to see "the expansion of the human species" as part of its utopian vision; I think the same is true of The Culture, but the books also challenge the idea), others straight-up reject it, like many of Le Guin's utopias, and I think ancient versions of the genre like the Abbey of Thélème don't think that much about it at all. However, all of those utopias portray humans as having or being able to achieve a variety of "personal fulfillment" goals such as those I listed; those are what I meant. I do think our evolutionary tendency to thrive & expand may be worth valuing for its own sake, contra Le Guin, but that's a different conversation.

Having said that, I don't think the "rat utopia" experiments say that much about human flourishing. For one thing, those "utopias" didn't meet all of the rats' needs: they had unlimited food and safety from outside threats, but they didn't have unlimited space or the kind of stimulation they evolved to thrive in and maintain their social structures with. I guess it's good nuance to understand that "flourishing" doesn't reduce to "unlimited food and safety from predators" and that organisms have other needs too (notably space), but I think it's something most people realize already. Note that stories that do feature "the evolutionary tendency of our species to thrive & expand" as utopian tend to have the opposite of a "rat utopia", with space colonization/exploration making space unlimited but with challenging conditions.

I'm also not convinced such behavioral sinks apply to humans, or at least apply to us as completely as they did to those rats. Some unique features we have that seem relevant here include our level of sociality, playfulness and adaptability. Humans are much more social than our closest relatives (& maybe all mammals), so overpopulation doesn't have the same impacts on us as it does on other species. We also (literally) play a lot more than any other species, in the sense of engaging in behaviors for the sake of random goals instead of the more straightforward ones that usually motivate us - in that category I'd list not only what we understand as play and games, but also things like art, science, sports, random hobbies, etc. We don't only individually play, but as cultures we devote time and resources to goals "for their own sake" instead of concrete survival/expansion. I'd guess such random behavior serves as a natural outlet in cases where conditions are "too favorable", one that we probably literally evolved to engage in (evolutionarily speaking, play has the purpose of learning new things, and you do it when conditions are favorable enough that you don't need to focus on survival), meaning it's likely to feel satisfying to some extent at least. Finally, Alison Gopnik says adaptability is a hallmark of humans as a species, and I think that claim holds up - human societies have proven able to adapt to a huge variety of environments, both physical and social. Our own societies are extremely different from the kind we evolved with and have tons of issues, but they still basically function, with a huge proportion of humans in them leading lives that range from satisfactory to fulfilling, in a way that wouldn't be true of a comparable number of chimpanzees. So I have doubts that we'd completely collapse as a species because of something so generic as conditions being "too favorable".

Humans and human societies can be broken, no doubt about that, but that usually involves extreme scenarios. Unfavorable ones, at that.

It might be worth noting at this point that a lot of us, particularly of the "posting randomly on the internet" variety, do functionally live in "rat utopias": unlimited food, no predators, but limited space and tons of people around. And I think most would attest that while it's not the key to perfect happiness, it also hasn't devolved into the horrifying hellscape the rats experienced.

This isn't to say I think human utopia is possible/coherent/compatible with our nature. I just don't think the rat experiment is a very good example for that argument.
