People love to talk about what the intentions are. However, when a system constantly produces a different outcome than the one it is "intended" for, then it's perfectly reasonable to assume the actual intention is the outcome it continues to produce.
@yogthos I characterize this idea as false. It stretches the basic meanings of words. Simply producing one example of a system whose purpose is not what it does disproves this statement. Take a car that drove for 100,000 miles then crashed into a ditch. The purpose of the car was to transport passengers and cargo, and it did so effectively for 100,000 miles. Now it sits in a ditch. Sitting in a ditch is not its intended purpose.
@escarpment that example is just sophistry
@yogthos This "principle" is just sophistry. Someone stated it confidently enough that people take it as true and interesting.
@yogthos "Purpose" is a word that means people's intentions. This "principle" amounts to "people's intentions are not people's intentions". Or "people's intentions are something other than their intentions." Or "people have secret intentions."
@escarpment people act according to systemic pressures they're exposed to
@yogthos @escarpment Sorry to butt in on this awesome dialogue, but I'm just interested.

This seems crucial to me:

"When a system's side effects or unintended consequences reveal that its behavior is poorly understood, then the POSIWID perspective can balance political understandings of system behavior with a more straightforwardly descriptive view."

This suggests the use of POSIWID as, essentially, a debugging methodology: a way to fix the system so that it serves its intended purpose rather than its currently implemented purpose. We take the perspective, "What would I say the purpose of this system is, if I didn't already know the intended purpose?"
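In software terms, the same move looks like reading the behavior instead of the docstring. A toy sketch (the function and its name are invented for illustration):

```python
def clamp_to_positive(x):
    """Stated intent: return the absolute value of x."""
    # Implemented behavior: negative inputs are mapped to 0, not |x|.
    return max(x, 0)

# Judging the function by what it does, not what it claims:
# its "purpose" is clamping, not taking absolute values.
print(clamp_to_positive(-5))   # prints 0, while the stated intent implies 5
```

The POSIWID-style reading of this function, from behavior alone, would name its purpose as "clamp to non-negative", and that diagnosis is exactly what points you at the bug.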

This seems to me a misuse of "purpose" to say it is what it does, so I have to agree with your interlocutor, there. "Function" might be a better word.

But then, this isn't some sort of philosophical principle, either. It's a cybernetics concept, so it may or may not be generalized to all complex systems. That's debatable.
@notroot @yogthos Thanks for the input. I guess to give the most favorable interpretation, upon rereading the Wikipedia article, the statement "there is no point in claiming that the purpose of a system is to do what it constantly fails to do" is a valid frustration. I see this pattern a lot, though: in frustration, people *exaggerate*. They go from "this system is so bad that it's almost *as if* it were designed to be bad!" and that morphs into "it *must have been* designed to be bad!"
@notroot @yogthos It is true that a lot of systems do not meet their intended purposes because people are bad at designing against real requirements, instead using intuition and feelings as a guide. Like the criminal justice system: "we should probably punish people for using drugs because that makes intuitive sense", without necessarily running a pilot study to see if that has the desired effect of reducing drug use.
@escarpment @yogthos That's really a great example. It has been convincingly argued that in this case there really was a "secret intention" to the "War on Drugs" -- to disenfranchise Black Americans and the poor, more generally, by branding them felons and taking away the right to vote. A continuation of Jim Crow tactics.

Which is where I agree with you over your interlocutor -- POSIWID is very valuable when analyzing a system without full knowledge of intent. In particular, in human systems, where full motives of parties involved in designing systems are frequently shrouded. Machiavelli, baby.
@notroot @yogthos In that case, though, the principle is not POSIWID. It is "sometimes there is a secret purpose." STIASP. I do not dispute STIASP. I dispute POSIWID.

@escarpment @notroot again, there is no secret purpose. There is the intent and then there's the implementation.

The goal is to understand what results the implementation produces, which is the implicit purpose of the system, and to reconcile that against the intent.

The purpose of the system (actual implementation) is always what the system is doing.

This can often be at odds with the stated intent. Understanding whether that's the case or not is the purpose of POSIWID.

@yogthos @notroot So you view a distinction between purpose and intent? I view them as synonyms. "The purpose of a system is what it does" === "the intent of a system is what it does". How are intent and purpose different?
@escarpment @notroot I draw a distinction between the goals and the implementation. The system is the implementation, and the purpose of the implementation is what it's actually doing. This is completely separate from your intent and goals. I don't know why this is so hard for you to wrap your head around.
@yogthos @escarpment It's also a very "cybernetics" way of looking at things... as if the system itself had agency or even intelligence. And indeed it's a cybernetics concept... a field of study that has more utility in AI research than in human social systems.

That's where the "purpose" quibble comes into play, I think. It ascribes agency to the system itself, which shapes individual behavior through feedback. There are sort of two purposes: the intended purpose of the individuals who designed and built the system, and the rhetorical "purpose" of the system, as if it had intentions. It's a useful perspective, IMO, especially for problem-solving, but it's not a fundamental scientific fact or anything.

Anyway, I do get it, and even agree with it. I just like the topic and it's deep enough to dive into, so here I am...

@notroot @escarpment I was approaching this from the dialectical materialism perspective, but cybernetics one is a good way to frame it as well.

The rules of the system create an entity with its own purpose that's the expression of these rules.

And this entity can be quite different from what might've been originally envisioned.

@yogthos @escarpment I think that's a very useful way of analyzing systems, yup. It might not be strictly true that a complex system is a distinct entity with agency, but it sure seems like it if you're one of the things being pushed around in the butterfly-caused hurricane!

Makes me think about Chaos Theory, too... how nonlinear complex systems may (or may not) spontaneously and unpredictably develop orderly dynamics from a chaos of individual interactions. How much more complex that system when the elements themselves have agency!
@notroot @yogthos I've encountered the idea that "humans are teleological thinkers." We assign purpose to everything because we ourselves form purposes. So "clouds are *for* giving shade", "wood is *for* burning". Covid "wants" to replicate. Assigning agency to systems seems like another case of taking this analogy too far and incorrectly anthropomorphizing.
@escarpment @yogthos It is, I agree, when we mistake the model for the thing. That's the trick... not to make that mistake.

Otherwise a system like "the universe and everything in it" could simply be ascribed agency and result in ridiculous concepts such as "God". Heh.

We have to remember that we're the ones saying, "there's a system called 'government' comprised of subsystems called ..." Individuals aren't really cogs in the machine. No more than the machine is really alive and independent. It just seems that way because we're basically cells in the bodies of these systems, which are composed of ourselves and other individuals with their own agency.

It's the difference between being a bit of flotsam in the sea, and a fish. Both are pushed around by the currents, but the fish can go looking for other currents.

@notroot @escarpment I like to look at this from the perspective of natural selection myself. You have the environment and it exerts some pressures on the agents within the environment. These pressures end up selecting for particular behaviors. I find this is a useful way to look at complex systems.

There is also a dialectical aspect to this where the behavior of the agents also shapes the system in turn.

@notroot @escarpment and this is why it's so useful to look at the system in terms of its rules and the behaviors that result from these rules. Understanding this relationship allows us to consciously tweak the rules to tune the purpose of the system towards the intent.
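A minimal sketch of that tuning loop (all names and numbers invented): observe what the system actually does under its current rules, measure the gap from the intent, and nudge the rules accordingly.

```python
target = 100.0   # the intent: the outcome we want the system to produce
rule = 0.0       # the adjustable rule parameter

def system(rule):
    # What the system actually does given its current rules
    # (a stand-in for observing real outcomes).
    return 2.0 * rule + 10.0

for _ in range(50):
    outcome = system(rule)     # POSIWID: look at what it actually does
    error = target - outcome   # gap between observed purpose and intent
    rule += 0.1 * error        # nudge the rules toward the intent

print(round(system(rule), 1))  # converges near 100.0
```

This is just a proportional feedback loop, of course; the thread's point about human systems is that the feedback arrives over years and the "target" itself is contested.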
@yogthos @escarpment I agree. And on a small scale, or with computers, that's pretty straightforward. But... at national scale, it's time-consuming and difficult to, say, amend the US Constitution. Laws are easier, but still hard. Really, affecting human systems is hard even with the force of law. People disobey laws.

We're messy. Chaotic.

@notroot @escarpment for sure human systems are complex, but that doesn't preclude us from being able to look at the outcomes the systems produce, and try to improve the areas where we identify problems.

I think the goal should be to define a desirable state of things and then to reflect on whether the rules of the system are getting us closer or further from that.

When we make changes we can reflect and compare to see if they move us closer or further from the goal.

@yogthos @escarpment Agreed. It's like incremental development, except it takes a long-ass time to get results to see if you fixed the bug. Years. And for example regulatory changes or environmental protection changes may be contested by powerful interests with competing agendas for the system.

@notroot @yogthos

> I think the goal should be to define a desirable state of things

Most likely people have shockingly different opinions about this. "Desirable" is sadly subjective. I suspect this is an "ask 100 people, get 100 different answers" type of question.

The moral anti-realist would say "of course they disagree on this subjective question because there are no objective mind-independent values."

@escarpment @notroot that's why ideas such as the democratic process have been invented: to figure out what the majority of people want the direction of things to be.
@yogthos @notroot Yes, and the result is often near perfect partisan gridlock, where people are uncannily perfectly divided across every possible opinion, suggesting that opinions expand to fill the realm of possible opinions. Wherever there is a window of subjectivity, people seize the opportunity. Maskers, anti-maskers, anti-vaxxers, environmentalists, coal rollers, hawks, doves, communists, capitalists, libertarians, pro-choice, pro-life, pro-gun, anti-gun.
@escarpment @yogthos That's another good example! The system (democracy) is partially failing at its intended purpose, but because of scale it is difficult and takes time to change. Momentum to change the system requires participation of many individuals over years to achieve meaningful objectives.

Should it be easier to change the system? Maybe, but what if, in a mass satanic panic, the majority changes the system in such a way that it destroys democracy? So how hard should it be to change?

@notroot @yogthos It's unclear the extent to which it is succeeding or failing. That is a subjective question. It's also unclear what its intended purpose was, though we can play detective by looking at founding documents:

"in Order to form a more perfect Union, establish Justice, insure domestic Tranquility, provide for the common defence, promote the general Welfare, and secure the Blessings of Liberty to ourselves and our Posterity."

@notroot @yogthos

Establish justice: very subjective. Literally true in that there is a justice department and justice system, but people will endlessly dispute what is just, and moral anti-realists can argue that justice is neither good nor bad as a value.

insure domestic Tranquility: actually pretty good, except for the civil war. There are not mortar shells flying anywhere in the US.

provide for the common defence: also pretty good. Have not been invaded or conquered.

@notroot @yogthos

promote the general Welfare: very subjective. Literally true: we have welfare and social security. But not everyone is doing so well economically.

secure the Blessings of Liberty: seemingly provably false, given the number of people incarcerated. But true for all the people in the US who are not incarcerated and are able to just walk around freely and go to coffee shops and stuff.

@escarpment @yogthos Seems like a simpler example might be the system of roads and traffic laws.

There's a human system that lends itself to analysis. Some tuning is a no-brainer... slow down in front of a school... but other pieces are hard as hell and take some ingenuity to get right, so they're usually wrong. Everyone who's lived in a city knows of a bad intersection like that.

Some of the effects of changes to traffic laws might be felt pretty quickly... like a change in speed limit for those who obey it. Others might take a long time, like a new overpass. Or seatbelts. Or EVs.

Most human systems aren't so accessible to analysis, particularly among marginalized communities, where there may be less trust.

@notroot @yogthos

> other pieces are hard as hell and take some ingenuity to get right, so they're usually wrong

Or there is no "right" and "wrong". The intersection has to have a winner and a loser because two or more parties have diametrically opposed subjective opinions about what to prioritize for the intersection.