People love to talk about what the intentions are. However, when a system constantly produces a different outcome than the one it is "intended" for, it's perfectly reasonable to assume the actual intention is the outcome it continues to produce.
@yogthos I characterize this idea as false. It stretches the basic meanings of words. Simply producing one example of a system whose purpose is not what it does disproves this statement. Take a car that drove for 100,000 miles then crashed into a ditch. The purpose of the car was to transport passengers and cargo, and it did so effectively for 100,000 miles. Now it sits in a ditch. Sitting in a ditch is not its intended purpose.
@escarpment that example is just sophistry
@yogthos This "principle" is just sophistry. Someone stated it confidently enough that people take it as true and interesting.
@yogthos "Purpose" is a word that means people's intentions. This "principle" amounts to "people's intentions are not people's intentions". Or "people's intentions are something other than their intentions." Or "people have secret intentions."
@escarpment people act according to systemic pressures they're exposed to

@yogthos This principle denies the possibility of things not going according to plan.

I'm working on a web game. The purpose of my game is to let players move around. I accidentally flip a != with an == and oopsie, no players can move. Is the purpose of the game to be a game where no one can move?
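A minimal sketch of that kind of slip, with hypothetical names and a toy movement check (not the actual game, just an illustration of flipping != to ==):

```javascript
// Hypothetical movement guard for a web game.
// Intended behavior: a player may move whenever they are NOT frozen.

// Buggy version: == was typed where != was intended,
// so only frozen players would be allowed to "move".
function canMove(player) {
  return player.state == "frozen";
}

// Intended version:
function canMoveFixed(player) {
  return player.state != "frozen";
}

console.log(canMove({ state: "walking" }));      // false — nobody can move
console.log(canMoveFixed({ state: "walking" })); // true
```

One flipped operator inverts the outcome of the whole system, which is exactly the gap between intent and implementation being argued about here.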

@yogthos I'm glad you brought up the word sophistry. That is exactly what this principle is. You give a principle an acronym, a seemingly reputable coiner, and a Wikipedia page and people will write blog posts about it and share it online, even if the principle is an oxymoron/equivocation/conceptual confusion.
@escarpment the only sophistry here are your examples I'm afraid

@yogthos I have provided multiple examples that clearly illustrate why this principle is bunk. You have doubled down in your defense of it, while providing no supporting examples for it, except vague allusions to "people responding to systemic pressures." I hope I have at least put the idea in your head that this principle needs scrutiny. We should aim to debunk first, and accept whatever survives all our debunking efforts.

That Wikipedia page lacks a "criticisms" or "controversy" section.

@escarpment you've provided straw man arguments, and I've literally explained in detail why your latest example doesn't make sense.
@escarpment you're conflating goals with the implementation here. A proper analogy would be to have game mechanics where players can't move, and to excuse this by saying that your intent was for players to be able to move instead of addressing the problem
@yogthos In that analogy, you are simply accusing the creator of the game of having intended the game to fail and then making an excuse for why it failed. This is like the conspiracy theory of all conspiracy theories, that every mistake is actually intentional. Version 3.1 of my chess engine comes out and actually no moves are possible. And you accuse me of *intentionally* making it that way and just making excuses.
@escarpment again, this is not what the principle states. What it says is that if your goal is to have the player move, and the current mechanics don't allow the player to move, then you need to change the mechanics.
@yogthos The principle is not "if a system is broken, fix it." That doesn't get a Wikipedia page and an acronym. It's "the purpose of a system is what it does."
@yogthos that's a principle I can get behind and I'm sure no one disputes. If a system defies its intended purpose, fix it.
@yogthos If that was the principle in question, why didn't you just say that? "If a system is broken, you need to change it."

@escarpment because the point there is that it's necessary to recognize what the system actually does and separate that from whatever intent is being espoused.

Acknowledging that the system is broken instead of talking about the intent of the system is the whole point of the principle. That's the prerequisite to making change.

@yogthos But if you have no idea what the intent is, how do you fix it? Presumably you fix it to align *more* with its purpose, further indicating that intent and what-it-does are different entities and the purpose of a system is *not* what it does.
@yogthos If purpose is X, what it does is Y, and X==Y, why change the system at all? It is meeting its intended purpose 100%.
@escarpment this has literally nothing to do with the principle being stated. I'm not sure why you're having so much trouble understanding it, to be honest, because it's not very complicated

@yogthos @escarpment

The Escarpment system is not about understanding.

@RD4Anarchy @yogthos "Apples are oranges. Now that I have your attention with a patently false statement that seems profound, let me clarify that by apples I meant whales and by oranges I meant mammals. What don't you understand?"

@yogthos @RD4Anarchy @[email protected]

Escarpment, if I recall correctly, once said that if faced with starvation they’d murder a child and force-feed the corpse to their own child, to keep their own child alive.

@HeavenlyPossum @yogthos @RD4Anarchy yep. According to my notes, also an Israeli government apologist.

@CorvidCrone @HeavenlyPossum

This is the game theory one as well, right?

@RD4Anarchy @CorvidCrone

Maybe? They did claim to have “studied the trolley problem.”

@HeavenlyPossum @CorvidCrone

I'm going to call this kind of person a Trolleyite.

@HeavenlyPossum While you wasted your days at the gym in pursuit of vanity, I studied the trolley problem

@escarpment the premise of the principle, once again, is that you DO know what the intent is and that the system is not serving that intent, but you're EXCUSING that by stating the intent.
@yogthos Why does the principle need so much help to make any sense? A principle such as "if a system has one intent and you know it, but it's not serving that intent, and you make an excuse for it by claiming another intent" is not very interesting or controversial. The principle explicitly states in English: "the purpose of a system is what it does." Those are the only words in the principle. I dispute those words and claim they form a false statement.
@escarpment I don't know why it's so hard for you personally to understand this principle, it made perfect sense to me the first time I read it 🤷‍♂️

@yogthos That's your first clue that it's an empty statement: it "makes perfect sense the first time you read it", without any scrutiny or examination of counterarguments.

There is a persistent problem of what I would call pseudo-philosophy. "Property is theft." "We all die alone." And this principle. They resonate with people because they seem satisfyingly profound, when they are really just linguistic perplexities that confuse and bewilder rather than clarifying or informing.

@yogthos If "it makes perfect sense" despite that meaning being in gross defiance of the generally agreed upon meanings of the words that constitute the principle, it's not a principle but a Rorschach test where you can impose whatever meaning you like.

Sure, the words themselves "the purpose of the system is what it does" are false, because we all agree that purpose and actual implementation are separate ideas. So here's what it *actually* means (insert your favored "interpretation").

@escarpment the words "the purpose of the system is what it does" are not false, you just intentionally choose to interpret them in a nonsensical way, which is why what you're doing here amounts to sophistry.

Human language is fundamentally subject to personal interpretation. That's just how language works.

@yogthos I'm interpreting them literally because I see no reason not to interpret them that way. All our discussions have presupposed that purpose and what-it-does are separate and distinct entities. All your clarifications have made markedly different arguments (which I haven't disputed) than the one made by the phrase "the purpose of a system is what it does."
@yogthos My counter argument is simple: "The purpose of a system is not what it does. The purpose is the intent and hope of the designers of the system. What-it-does is the result or outcome of the design process, and is subject to change as the designers iterate on their design of the system to match its intended purpose."
@escarpment no, all our discussion is predicated on your conflating the intent with the implementation of the system which is the actual purpose of it.
@escarpment you're extrapolating a lot based on your personal difficulty understanding a particular concept here

@escarpment the point there is that you have to acknowledge the way the system is ACTUALLY functioning and the outcomes it's producing, and to separate that from whatever intent there was.

Only then can you begin to change the system in a positive way. I don't know how much more clearly I can spell this out for you.

@yogthos So the point is to acknowledge how a system is functioning and how it varies from its intent. That too is an uncontroversial principle.

"One ought to monitor and evaluate a system against its intended purposes to determine how well it is functioning to meet those purposes."

That too is different than "the purpose of a system is what it does", which is false.

@yogthos @escarpment Sorry to butt in on this awesome dialogue, but I'm just interested.

This seems crucial to me:

"When a system's side effects or unintended consequences reveal that its behavior is poorly understood, then the POSIWID perspective can balance political understandings of system behavior with a more straightforwardly descriptive view."

This suggests the use of POSIWID as, essentially, a debugging methodology, to fix the system so that it does serve its intended purpose, rather than its currently implemented purpose. We take the perspective, "What would I say the purpose of this system is, if I didn't already know the intended purpose?"

This seems to me a misuse of "purpose" to say it is what it does, so I have to agree with your interlocutor, there. "Function" might be a better word.

But then, this isn't some sort of philosophical principle, either. It's a cybernetics concept, so it may or may not be generalized to all complex systems. That's debatable.
@notroot @yogthos Thanks for the input. I guess to give the most favorable interpretation, upon rereading the Wikipedia article, the statement "there is no point in claiming that the purpose of a system is to do what it constantly fails to do" is a valid frustration. I see this pattern a lot though- in frustration, people *exaggerate*. They go from "this system is so bad that it's almost *as if* it were designed to be bad!" And that morphs into "it *must have been* designed to be bad!"
@notroot @yogthos It is true that a lot of systems do not meet their intended purposes because people are bad at designing against real requirements, instead using intuition and feelings as a guide. Like the criminal justice system: "we should probably punish people for using drugs because that makes intuitive sense", without necessarily running a pilot study to see if that has the desired effect of reducing drug use.
@escarpment @yogthos That's really a great example. It has been convincingly argued that in this case there really was a "secret intention" to the "War on Drugs" -- to disenfranchise Black Americans and the poor, more generally, by branding them felons and taking away the right to vote. A continuation of Jim Crow tactics.

Which is where I agree with you over your interlocutor -- POSIWID is very valuable when analyzing a system without full knowledge of intent. In particular, in human systems, where full motives of parties involved in designing systems are frequently shrouded. Machiavelli, baby.
@notroot @yogthos In that case, though, the principle is not POSIWID. It is "sometimes there is a secret purpose." STIASP. I do not dispute STIASP. I dispute POSIWID.

@escarpment @notroot again, there is no secret purpose. There is the intent and then there's the implementation.

The goal is to understand what results the implementation produces, which is the implicit purpose of the system, and to reconcile that against the intent.

The purpose of the system (actual implementation) is always what the system is doing.

This can often be at odds with the stated intent. Understanding whether that's the case or not is the purpose of POSIWID.

@yogthos @notroot So you view a distinction between purpose and intent? I view them as synonyms. "The purpose of a system is what it does" === "the intent of a system is what it does". How are intent and purpose different?
@escarpment @notroot I draw the distinction between the goals and the implementation. The system is the implementation, and the purpose of the implementation is what it's actually doing. This is completely separate from your intent and goals. I don't know why this is so hard for you to wrap your head around.
@yogthos @escarpment It's also a very "cybernetics" way of looking at things... as if the system itself had agency or even intelligence. And indeed it's a cybernetics concept... a field of study that has more utility in AI research than in human social systems.

That's where the "purpose" quibble comes into play, I think. It ascribes agency to the system, itself, which shapes individual behavior through feedback. There's sort of two purposes: the intended purpose of the individuals who designed and built the system, and the rhetorical "purpose" of the system, as if it had intentions. It's a useful perspective, IMO, especially for problem-solving, but it's not a fundamental scientific fact, or anything.

Anyway, I do get it, and even agree with it. I just like the topic and it's deep enough to dive into, so here I am...

@notroot @escarpment I was approaching this from the dialectical materialism perspective, but the cybernetics one is a good way to frame it as well.

The rules of the system create an entity with its own purpose that's the expression of these rules.

And this entity can be quite different from what might've been originally envisioned.

@yogthos @escarpment I think that's a very useful way of analyzing systems, yup. It might not be strictly true that a complex system is a distinct entity with agency, but it sure seems like it if you're one of the things being pushed around in the butterfly-caused hurricane!

Makes me think about Chaos Theory, too... how nonlinear complex systems may (or may not) spontaneously and unpredictably develop orderly dynamics from a chaos of individual interactions. How much more complex that system when the elements themselves have agency!
@notroot @escarpment right, and we don't necessarily have to assign agency in the sense of volition or consciousness, just that the system exhibits particular behaviors that result from the properties of the system and the environment it inhabits
@yogthos @escarpment Absolutely! Brownian motion n all that. The difference is, in the case of society, that the elements of the system DO have agency of their own, and in fact it's individual agency that drives many of the system dynamics. Sometimes driving them off their intended (heh) rails.

This complicates shit a bit. Individuals are already complex. Now make them the elements of various complex systems. What is this systemic agent? It's basically composed of human interactions. But it's not really independent. Not really. It just seems like it, because we're included in it, and in fact, we made it up.

The Santa Fe Institute was doing some cool chaos maths in biology and even human society, but studying people I think will always be an inexact science.
@notroot @yogthos I have learned the idea that "humans are teleological thinkers." We assign purpose to everything because we ourselves form purposes. So "clouds are *for* giving shade", "wood is *for* burning". Covid "wants" to replicate. Assigning agency to systems seems like another case of taking this analogy too far and incorrectly anthropomorphizing.
@notroot @yogthos I totally agree about chaos theory though. I think the output of many minds has more in common with a hurricane than with the thought of an individual person. It is hard to ascribe intent to the combined behavior of an entire group or country if that entity is sufficiently large.
@escarpment @yogthos Yah that's where I think it's useful to think about the system as an entity. Not true, but useful.

I think all human systems are ultimately emergent, meaning they're basically natural, even organic. Every single one relies on chains of individual interactions to perform its purpose. Paying taxes. Getting insurance to pay. The prison system. They aren't just imperfect systems... *they're barely "systems" at all*. Just people shuffling around like weevils, farting, sleeping, etc.

I think that's my biggest beef with cybernetics. It carries useful analogies too far.
@escarpment @yogthos It is, I agree, when we mistake the model for the thing. That's the trick... not to make that mistake.

Otherwise a system like "the universe and everything in it" could simply be ascribed agency and result in ridiculous concepts such as "God". Heh.

We have to remember that we're the ones saying, "there's a system called 'government' comprised of subsystems called ..." Individuals aren't really cogs in the machine. No more than the machine is really alive and independent. It just seems that way because we're basically cells in the bodies of these systems, which are composed of ourselves and other individuals with their own agency.

It's the difference between being a bit of flotsam in the sea, and a fish. Both are pushed around by the currents, but the fish can go looking for other currents.

@notroot @escarpment I like to look at this from the perspective of natural selection myself. You have the environment and it exerts some pressures on the agents within the environment. These pressures end up selecting for particular behaviors. I find this is a useful way to look at complex systems.

There is also a dialectical aspect to this where the behavior of the agents also shapes the system in turn.

@notroot @escarpment and this is why it's so useful to look at the system in terms of its rules and the behaviors that result from these rules. Understanding this relationship allows us to consciously tweak the rules to tune the purpose of the system towards the intent.
@yogthos @escarpment I agree. And on a small scale, or with computers, that's pretty straightforward. But... at national scale, it's time-consuming and difficult to, say, amend the US Constitution. Laws are easier, but still hard. Really, affecting human systems is hard even with the force of law. People disobey laws.

We're messy. Chaotic.

@notroot @escarpment for sure human systems are complex, but that doesn't preclude us from being able to look at the outcomes the systems produce, and try to improve the areas where we identify problems.

I think the goal should be to define a desirable state of things and then to reflect on whether the rules of the system are getting us closer or further from that.

When we make changes we can reflect and compare to see if they move us closer or further from the goal.