RE: https://bsky.app/profile/did:plc:w2lvfvgooagydunsj2dpxtnj/post/3mddzh24qvk2t
@witchescauldron @hamishcampbell
Well, I think I understand what you're implying there, and would definitely agree that information is not enough (fluffy?).
Lifestyles, habits, and world-views have to change too, and for those, more info is not what is generally needed. Pain & loss are unfortunate requirements, aka "learning the hard way" through experience instead of a lecture.
Given that we will do more to avoid pain, standing on the mainstream sidelines is what the comfy majority do. That area is constantly shrinking though, so that's one thing in favor of motivating serious change.
I think that short-term mentality is also impeding the solutions we're talking about, which are by nature long-term, generational projects. We will have to modify religious world-views, #nationalist /economic ones, and scientific ones as well, all notoriously slow to change & adapt, even though change is inevitable in the long run.
These institutions suffer from the lack of #openaccess & #opensource properties in multiple, duplicitous ways. They are top-down, closed #hierarchies, #reductionist, as well as exclusionary & rivalrous.
Linear thinking & toy models rule the day because they are faster & easier to implement, and very often do no noticeable harm, or if they do, it's only to a small, powerless minority (or take lifetimes to accumulate into a problem).
This has an #evolutionary basis, so solutions will have to have special characteristics of 'over-ride' (power) while remaining constrained in both scope and time ( #decentralized & periodic). It will require the kind of #trust that #inequality and #plutocracy won't allow.
We'll need to be #interconnected, co-dependent, and often vulnerable. #Network forms must replace #Corporate organizational framing (note that even NGOs & NPOs adopt the post-agrarian revolution, city-state + Ptolemaic/ #anthropocentric hierarchy).
So a re-write of the narrative is in order. This is quite natural and repeats regularly throughout history. There are software updates & patches, and then there are entirely new apps & devices (driven by hardware advances, or tech in general).
Kuhn has clear ideas about how #science navigates these changes. Puzzle-solving ≈ bug patches. We are well beyond a saturation of patches; it's time for a paradigm-shifting philosophy & protocol. My hack at this is called the #Information_Paradigm.
We are, in essence, needing something that can easily appear contradictory-- the early adopters of new paradigms are a very different crowd than the masses that will follow them based on emotional or fast-brain processes. Getting the early adopters onboard takes a different set of enticements than the post-pivot critical phase will require for the rest of civilization.
IOW, the 'fluffy' stuff comes back as partly but critically necessary, which is why we can't eliminate it completely. Leaders emerge across all scales, and need to be able to find common ground with their distant cohorts, rather than competitive isolation or aggression for individual benefit.
These leaders, and the groups/networks they represent, are very diverse. A default to "Open" principles means, among other things, looking for shared desires & interests to build #cooperation on, rather than immediately dividing along lines of difference.
Imagine an #education system that taught kids to think & reason, from principles like your examples, rather than memorizing nationalistic manifestos, noble bloodlines, and other usually worthless trivia.
A core 'plank' in the #Information_Paradigm is generalizing & simplifying info, and a key area this must be done in is education.
If the #knowledge people have is always tainted with other people's biased priors (because of poor presentation on top of poor comprehension) then #misinformation flourishes, creating an order of magnitude more work to undo later, if at all.
It's probably convenient (opportunistic?) to describe this in terms of training for logic, which we could loosely define as 'consistent understanding' (people agreeing on #meaning).
Sometimes the new #chatbots say things that we see as logical (therefore intelligent), other times they are 180 deg wrong. Why?
#BodyLanguage is a bit murkier than words, but it still shares most of the same problems of #communication. Polysemic meanings (homonyms) have both local & regional sources of definitions (limiting this to one language), and this is compounded as we go from general to specific.
For example, the word dog is likely to be the same in any region of the same language, and communicating this basic idea should be fairly straightforward. However, if we wish to include many details about the dog, including behavior or personality, the number of words required to communicate this accurately rises exponentially.
The way we manage such large chunks of #information is through relying on experiencing consistency between intended and perceived meanings. This is how we learn; this is how all things "learn". It requires iteration, flexibility, patience, and #ErrorCorrecting feedback loops.
This is where we see critical differences regarding body language. There is no dictionary of body language meanings, and we rarely have opportunity for broad-based input on the iteration side, nor on the error correcting side. (Yes, I know that there are academic studies of body language "definitions" used by several professions, but these are mainly for controlled environments, well beyond formative years.)
All this boils down to all of us having different #InitialConditions from which we base our instinctive understanding of meaning. Over time, a mistaken word definition would be uncovered with use, as there are many opportunities to be corrected. Body language not only does not have a dictionary, it does not have grammar rules or other mechanisms for consensus to be reached. Body language is not all subconscious either, much of it is learned and intentionally enacted in order to communicate better.
There's also the key difference that when we write or speak, we do so with singular focus (end result, not creation). When distracted mid-sentence, most people lose their place in their train of thought.
With body language the 'extra content' can be totally unrelated to what the person is saying-- again, sometimes intentional, other times subconscious. When this kind of disconnect is going on, we have to resort to trying to pull information from other places, and that is not always available.
So IMO, saying you're not good at body language is like saying you're not good at flipping coins because it took many flips before you arrived at 50%, and the person next to you did it in 2 flips. Our experience of #statistical variation is beyond our control.
The world is full of #FalsePositives, where people are walking around viewing the world in binary terms because life flipped 1 head and then 1 tail for them, and they think that's the only way to get to 50% (that's a good alternative description of #neurotypical!).
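To make the coin-flip analogy concrete, here's a quick simulation sketch (my own illustration, not anything from the thread; the function name is hypothetical). It tracks the running proportion of heads for two people flipping the same fair coin:

```python
import random

def running_proportions(n_flips, seed):
    """Return the running proportion of heads after each fair coin flip."""
    rng = random.Random(seed)
    heads = 0
    props = []
    for i in range(1, n_flips + 1):
        heads += rng.random() < 0.5  # True counts as 1 head
        props.append(heads / i)
    return props

# Two people, same fair coin, different luck:
for seed in (1, 2):
    props = running_proportions(1000, seed)
    print(f"seed {seed}: after 10 flips {props[9]:.2f}, "
          f"after 1000 flips {props[-1]:.2f}")
```

Run it with a few different seeds: the early proportions swing wildly while the long-run averages all settle near 0.5; same fair process, very different #InitialConditions.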
TL/DR: If you think you're not good at interpreting BL, it's probably because you were dealt a lot of #variation early on. If you put in some time to observe & reflect on it, you will end up with a higher than average ability after just a few corrections!
Gleanings from the #Information_Paradigm.. 🤓
I feel like there's more to dig up here. I have no idea how people in ML think, or what their goals are, but it seems like there is more shared ground, and even some complementary angles between them.
If we look at this from the perspective that #MachineLearning is also a branch of science and serves a #meta function whose initial benchmark at least is to improve on the main functions of science, then perhaps the reasons why they appear different will take on new meaning. Certainly, the goals of #science are not static and change with time, culture, and especially #information technology.
Now, there's no doubt that what you're showing here is accurate. In fact, in a very ironic sort of way, the scaling difference between the large group of 'science people' and the small group of 'ML people' means that the former, with its thousand points of scattered light, went into #overshoot mode long ago in terms of being able to describe #consensus goals, common methodology, or consistent #philosophy. In short, science is an enormously long freight train without a driver, while the ML group is still a small enough vehicle to be discussing such things with reasonable expectations of broad agreement & flexibility.
Anyway, science is long overdue for systemic #revolution and re-org. Science (I'm just going to use 'we' from now on) did not predict the demise of its foundational philosophies. We thought it was all about #logic- a straightforward, usually binary approach that, once figured out, would easily be taught to every #student we could accommodate in a classroom. Another ironic point is that we knew about this shortcoming long before the #digital age, and the combination of these two drivers meant we had underestimated the problems by orders of magnitude.
This is what is behind the fact that there's also tension between the three spheres you highlight on the science side of the comparison. We have evolved our #knowledge to a point where there is no more low-hanging fruit, which means the standard of short, simple causal explanations can no longer be the default. The reality is that when you combine hundreds of simplistic causal narratives that have been reductively isolated from their natural environments and relations, what you get is a "really complicated" looking big picture of science. (standard disclaimer here that simple is not the opposite of #complexity science).
So hopefully, we are now laying the groundwork for the completion of our society's #evolution into the #Information_Paradigm.
Sorry I went off on a long tangent- most of this is probably not relevant for your class.
TL/DR: I'm predicting that a hybrid philosophy will emerge from the tension highlighted in the OP's gif, resulting in a proliferation of generalized knowledge bridges.