People love to talk about intentions. However, when a system constantly produces a different outcome than the one it is "intended" for, it's perfectly reasonable to assume the actual intention is the outcome it continues to produce.
@yogthos I characterize this idea as false. It stretches the basic meanings of words. Simply producing one example of a system whose purpose is not what it does disproves this statement. Take a car that drove for 100,000 miles then crashed into a ditch. The purpose of the car was to transport passengers and cargo, and it did so effectively for 100,000 miles. Now it sits in a ditch. Sitting in a ditch is not its intended purpose.
@escarpment that example is just sophistry
@yogthos This "principle" is just sophistry. Someone stated it confidently enough that people take it as true and interesting.
@yogthos "Purpose" is a word that means people's intentions. This "principle" amounts to "people's intentions are not people's intentions". Or "people's intentions are something other than their intentions." Or "people have secret intentions."
@escarpment people act according to systemic pressures they're exposed to
@yogthos @escarpment Sorry to butt in on this awesome dialogue, but I'm just interested.

This seems crucial to me:

"When a system's side effects or unintended consequences reveal that its behavior is poorly understood, then the POSIWID perspective can balance political understandings of system behavior with a more straightforwardly descriptive view."

This suggests the use of POSIWID as, essentially, a debugging methodology, to fix the system so that it
does serve its intended purpose, rather than its currently implemented purpose. We take the perspective, "What would I say the purpose of this system is, if I didn't already know the intended purpose?"
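The debugging framing translates directly into code: infer a function's purpose from its observable behavior alone, then reconcile that with its stated intent. A toy sketch (the function name and its buggy behavior are made up for illustration):

```python
# POSIWID-as-debugging: judge the system by what it observably does,
# then compare that against the stated intent. Hypothetical example.

def remove_duplicates(items):
    """Stated intent: return the items with duplicates removed."""
    return sorted(set(items))  # implementation also sorts, discarding order

# Stated purpose: deduplicate.
# Observed purpose (what it does): deduplicate AND reorder.
observed = remove_duplicates([3, 1, 3, 2])
print(observed)  # [1, 2, 3] -- the original order [3, 1, 2] was not preserved
```

Describing the function by its behavior ("it deduplicates and sorts") rather than its docstring is exactly the "straightforwardly descriptive view" the quote recommends.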

This seems to me a misuse of "purpose" to say it is what it does, so I have to agree with your interlocutor, there. "Function" might be a better word.

But then, this isn't some sort of philosophical principle, either. It's a cybernetics concept, so it may or may not be generalized to all complex systems. That's debatable.
@notroot @yogthos Thanks for the input. To give the most favorable interpretation, upon rereading the Wikipedia article, the statement "there is no point in claiming that the purpose of a system is to do what it constantly fails to do" is a valid frustration. I see this pattern a lot, though: in frustration, people *exaggerate*. They start from "this system is so bad that it's almost *as if* it were designed to be bad!" and that morphs into "it *must have been* designed to be bad!"
@notroot @yogthos It is true that a lot of systems do not meet their intended purposes because people are bad at designing against real requirements, instead using intuition and feelings as a guide. Like the criminal justice system: "we should probably punish people for using drugs because that makes intuitive sense", without necessarily running a pilot study to see if that has the desired effect of reducing drug use.
@escarpment @yogthos That's really a great example. It has been convincingly argued that in this case there really was a "secret intention" to the "War on Drugs" -- to disenfranchise Black Americans and the poor, more generally, by branding them felons and taking away the right to vote. A continuation of Jim Crow tactics.

Which is where I agree with you over your interlocutor -- POSIWID is very valuable when analyzing a system without full knowledge of intent. In particular, in human systems, where full motives of parties involved in designing systems are frequently shrouded. Machiavelli, baby.
@notroot @yogthos In that case, though, the principle is not POSIWID. It is "sometimes there is a secret purpose." STIASP. I do not dispute STIASP. I dispute POSIWID.

@escarpment @notroot again, there is no secret purpose. There is the intent and then there's the implementation.

The goal is to understand what results the implementation produces, which is the implicit purpose of the system, and to reconcile that against the intent.

The purpose of the system (actual implementation) is always what the system is doing.

This can often be at odds with the stated intent. Understanding whether that's the case or not is the purpose of POSIWID.

@yogthos @notroot So you view a distinction between purpose and intent? I view them as synonyms. "The purpose of a system is what it does" === "the intent of a system is what it does". How are intent and purpose different?
@escarpment @notroot I draw the distinction between the goals and the implementation. The system is the implementation, and the purpose of the implementation is what it's actually doing. This is completely separate from your intent and goals. I don't know why this is so hard for you to wrap your head around.
@yogthos @escarpment It's also a very "cybernetics" way of looking at things... as if the system itself had agency or even intelligence. And indeed it's a cybernetics concept... a field of study that has more utility in AI research than in human social systems.

That's where the "purpose" quibble comes into play, I think. It ascribes agency to the system, itself, which shapes individual behavior through feedback. There are sort of two purposes: the intended purpose of the individuals who designed and built the system, and the rhetorical "purpose" of the system, as if it had intentions. It's a useful perspective, IMO, especially for problem-solving, but it's not a fundamental scientific fact, or anything.

Anyway, I do get it, and even agree with it. I just like the topic and it's deep enough to dive into, so here I am...

@notroot @escarpment I was approaching this from the dialectical materialism perspective, but the cybernetics one is a good way to frame it as well.

The rules of the system create an entity with its own purpose that's the expression of these rules.

And this entity can be quite different from what might've been originally envisioned.

@yogthos @escarpment I think that's a very useful way of analyzing systems, yup. It might not be strictly true that a complex system is a distinct entity with agency, but it sure seems like it if you're one of the things being pushed around in the butterfly-caused hurricane!

Makes me think about Chaos Theory, too... how nonlinear complex systems may (or may not) spontaneously and unpredictably develop orderly dynamics from a chaos of individual interactions. How much more complex that system when the elements themselves have agency!
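The logistic map is the standard toy example of that point: the same simple rule, x → r·x·(1−x), settles into a stable fixed point at low r and becomes chaotic at higher r. A minimal sketch (parameter values chosen just for illustration):

```python
# Logistic map: x_{n+1} = r * x_n * (1 - x_n).
# At r = 2.8 the orbit converges to a stable fixed point;
# at r = 4.0 the identical rule produces chaotic, aperiodic behavior.

def orbit(r, x0=0.2, steps=200):
    x = x0
    for _ in range(steps):
        x = r * x * (1 - x)
    return x

# Orderly regime: converges to the fixed point 1 - 1/r.
print(orbit(2.8))  # ~0.642857 (= 1 - 1/2.8)

# Chaotic regime: two nearly identical starting points end up far apart.
print(orbit(4.0, 0.2), orbit(4.0, 0.2000001))
```

Same rule, different parameter: order or chaos. That sensitivity to initial conditions is the "butterfly" part.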
@notroot @escarpment right, and we don't necessarily have to assign agency in the sense of volition or consciousness, just that the system exhibits particular behaviors that are a result of the properties of the system and the environment that it inhabits
@yogthos @escarpment Absolutely! Brownian motion 'n all that. The difference is, in the case of society, that the elements of the system DO have agency of their own, and in fact it's individual agency that drives many of the system dynamics. Sometimes driving them off their intended (heh) rails.

This complicates shit a bit. Individuals are already complex. Now make them the elements of various complex systems. What is this systemic agent? It's basically composed of human interactions. But it's not really independent. Not really. It just seems like it, because we're included in it, and in fact,
we made it up.

The Santa Fe Institute was doing some cool chaos maths in biology and even human society, but studying people I think will always be an inexact science.
@notroot @yogthos I've come across the idea that "humans are teleological thinkers." We assign purpose to everything because we ourselves form purposes. So "clouds are *for* giving shade", "wood is *for* burning". Covid "wants" to replicate. Assigning agency to systems seems like another case of taking this analogy too far and incorrectly anthropomorphizing.
@notroot @yogthos I totally agree about chaos theory though. I think the output of many minds has more in common with a hurricane than with the thought of an individual person. It is hard to ascribe intent to the combined behavior of an entire group or country if that entity is sufficiently large.
@escarpment @yogthos Yah that's where I think it's useful to think about the system as an entity. Not true, but useful.

I think all human systems are ultimately emergent, meaning they're basically natural, even organic. Every single one relies on chains of individual interactions to perform its purpose. Paying taxes. Getting insurance to pay. The prison system. They aren't just imperfect systems... *they're barely "systems" at all*. Just people shuffling around like weevils, farting, sleeping, etc.

I think that's my biggest beef with cybernetics. It carries useful analogies too far.
@escarpment @yogthos It is, I agree, when we mistake the model for the thing. That's the trick... not to make that mistake.

Otherwise a system like "the universe and everything in it" could simply be ascribed agency and result in ridiculous concepts such as "God". Heh.

We have to remember that we're the ones saying, "there's a system called 'government' comprised of subsystems called ..." Individuals aren't
really cogs in the machine. No more than the machine is really alive and independent. It just seems that way because we're basically cells in the bodies of these systems, which are composed of ourselves and other individuals with their own agency.

It's the difference between being a bit of flotsam in the sea, and a fish. Both are pushed around by the currents, but the fish can go looking for other currents.

@notroot @escarpment I like to look at this from the perspective of natural selection myself. You have the environment and it exerts some pressures on the agents within the environment. These pressures end up selecting for particular behaviors. I find this is a useful way to look at complex systems.

There is also a dialectical aspect to this where the behavior of the agents also shapes the system in turn.
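The selection framing can be sketched with a replicator-style update: if one behavior pays off better in a given environment, its share of the population grows each generation. A deterministic toy model (the payoff advantage is a made-up number):

```python
# Toy selection-pressure model: a behavior with a payoff advantage
# grows its share of the population each generation. The 3x advantage
# is an arbitrary illustrative value.

def next_share(p, advantage=3.0):
    """Replicator-style update: share grows with relative payoff."""
    return advantage * p / (advantage * p + (1.0 - p))

p = 0.1  # the favored behavior starts rare (10% of the population)
for generation in range(10):
    p = next_share(p)
print(round(p, 4))  # after 10 generations the favored behavior dominates
```

This is the sense in which the environment "selects" behaviors without anyone intending the outcome: no agent changed its mind, yet the population-level behavior shifted.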

@notroot @escarpment and this is why it's so useful to look at the system in terms of its rules and the behaviors that result from these rules. Understanding this relationship allows us to consciously tweak the rules to tune the purpose of the system towards the intent.
@yogthos @escarpment I agree. And on a small scale, or with computers, that's pretty straightforward. But... at national scale, it's time-consuming and difficult to, say, amend the US Constitution. Laws are easier, but still hard. Really, affecting human systems is hard even with the force of law. People disobey laws.

We're messy. Chaotic.

@notroot @escarpment for sure human systems are complex, but that doesn't preclude us from being able to look at the outcomes the systems produce, and try to improve the areas where we identify problems.

I think the goal should be to define a desirable state of things and then to reflect on whether the rules of the system are getting us closer or further from that.

When we make changes we can reflect and compare to see if they move us closer or further from the goal.
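That reflect-and-compare cycle is essentially closed-loop control: measure the outcome, compare it to the goal, nudge the rules, repeat. A minimal sketch with a stand-in one-parameter "system" (all names and numbers here are hypothetical):

```python
# Sketch of the tweak-measure-compare loop described above.
# The "system" is a stand-in whose outcome depends on one tunable rule.

def outcome(rule):
    # Stand-in for "what the system actually does" under a given rule.
    return 3.0 * rule + 1.0

def tune(goal, rule=0.0, step=0.05, iters=100):
    """Nudge the rule toward the goal, re-measuring after each change."""
    for _ in range(iters):
        error = goal - outcome(rule)
        if abs(error) < 1e-6:
            break
        rule += step * error  # move in the direction that shrinks the gap
    return rule

rule = tune(goal=10.0)
print(round(outcome(rule), 3))  # 10.0 -- the tuned system hits the goal
```

In software the feedback loop runs in milliseconds; in policy, as noted below, each iteration can take years.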

@yogthos @escarpment Agreed. It's like incremental development, except it takes a long-ass time to get results to see if you fixed the bug. Years. And for example regulatory changes or environmental protection changes may be contested by powerful interests with competing agendas for the system.

@notroot @yogthos

> I think the goal should be to define a desirable state of things

Most likely people have shockingly different opinions about this. Desirable is sadly subjective. I suspect this is an "ask 100 people, get 100 different answers" type of question.

The moral anti-realist would say "of course they disagree on this subjective question because there are no objective mind-independent values."

@escarpment @notroot that's why ideas such as the democratic process have been invented to figure out what the majority of people want the direction of things to be.
@yogthos @escarpment Yup... a system to govern other systems heheh.
@yogthos @notroot Yes, and the result is often near perfect partisan gridlock, where people are uncannily perfectly divided across every possible opinion, suggesting that opinions expand to fill the realm of possible opinions. Wherever there is a window of subjectivity, people seize the opportunity. Maskers, anti-maskers, anti-vaxxers, environmentalists, coal rollers, hawks, doves, communists, capitalists, libertarians, pro-choice, pro-life, pro-gun, anti-gun.
@escarpment @yogthos That's another good example! The system (democracy) is partially failing in its intended purpose, but because of scale it is difficult and takes time to change. Momentum to change the system requires participation of many individuals over years to achieve meaningful objectives.

Should it be easier to change the system? Maybe, but what if, in a mass satanic panic, the majority changes the system in such a way that it destroys democracy? So how hard
should it be to change?
@notroot @escarpment we should be careful not to equate western parliamentary democracy which is a particular implementation of the concept with the broader idea of democracy though
@yogthos @escarpment I don't think it's necessary to split hairs on that point, in the current anti-democratic climate.

@notroot @yogthos It's unclear the extent to which it is succeeding or failing. That is a subjective question. It's also unclear what its intended purpose was, though we can play detective by looking at founding documents:

"in Order to form a more perfect Union, establish Justice, insure domestic Tranquility, provide for the common defence, promote the general Welfare, and secure the Blessings of Liberty to ourselves and our Posterity."

@notroot @yogthos

Establish justice: very subjective. Literally true in that there is a justice department and justice system, but people will endlessly dispute what is just, and moral anti-realists can argue that justice is neither good nor bad as a value.

insure domestic Tranquility: actually pretty good, except for the civil war. There are not mortar shells flying anywhere in the US.

provide for the common defence: also pretty good. Have not been invaded or conquered.

@escarpment @notroot have you considered that this is actually a problem with the implementation of democracy in western countries?
@yogthos @notroot I doubt it. A moral anti-realist would say it is neither good nor bad. It is just a fact of how democracy plays out, and it is only surprising inasmuch as the average person subscribes to the consensus fallacy that most people probably agree with them on subjective matters.
@yogthos @escarpment That's also a useful way of looking at it -- the environment doesn't have agency, it's just forces of nature, doing their thing.

It's also an analogy, tho, because humans have individual agency, unlike the rain and wind in a hurricane, or the stone of an earthquake. All our systems are really a bunch of small individual interactions.

Personally, I think chaos theory is the closest model. We've learned to create orderly dynamics out of the chaos of human interaction. We teach our children how to do it.

@notroot @escarpment one of my personal favorite examples of how systems affect human behavior is the transition from communism to capitalism after the fall of USSR.

The same people who were positively contributing to society under the communist system quickly learned to change their behaviors and turned into oligarchs under the new one.

To me this is a great real world example of how systemic pressures affect behavior.

@yogthos @escarpment Also, it's interesting to me that -- once again -- the crux of the debate hinges on scale. Very many philosophical dilemmas seem to hinge on scale: the one and the many.

There was a seemingly semantic disagreement about the word "purpose," but it turned out that one purpose (intended) was individual, and the other purpose (implemented) was systemic.

Anyone who disagrees that a system is an entity that can have its own agency and purpose might also disagree, in all fairness, with POSIWID. It assumes that a system can.

So I see both points of view, but happen to agree with the utility of the model and methodology, even if some of its assumptions are debatable.
@escarpment @yogthos I think it comes back to peculiarities of cybernetics and the way they think about systems. They study feedback, and rhetorically ascribe agency to systems. In their way of talking, a system can have its own purpose, separate from the purpose of the individuals who designed and built it.

Again, from my admittedly basic understanding, the whole idea of POSIWID is properly understood in the context of cybernetics. In general it's more applicable to thinking about neural nets than human society, though its proponents would no doubt disagree.

What interests me is that POSIWID seems pretty useful to me, when applied to society. I think it's not wrong, as a methodology. It's like having an independent analysis of the system, which makes no assumptions about the intended purpose. CRT would seem an example, to me, on the surface at least.
@escarpment @yogthos I don't, as long as we're talking about it as a debugging methodology, and not a universal principle. Ineptitude is more likely than secret purposes, but in either case the approach works within that sort of "bugfixing" context to identify either the "unintended side-effects" of ineptitude, or the hidden purpose of the system.

I definitely agree that it can't be bandied about like a truism. It's like the "Don't Repeat Yourself" principle of computer science -- it needs context.