Starting a new hashtag - #tyrannyOfAesthetics

Aesthetics sits firmly in the attention-grabbing, novelty-chasing, transient pace layer (fashion), yet it governs most decisions because we can’t tolerate boredom or the boring - https://longnow.org/ideas/pace-layers/

1. Reinventing the XMPP protocol (plumbing) for instant messaging. This is like reinventing HTTP every time you create a web application - https://gultsch.social/@daniel/115751087451745641

2. Reinventing software packaging & distribution (logistics), one programming language at a time - https://circumstances.run/@hipsterelectron/115751055245955928

Pace Layering: How Complex Systems Learn and Keep Learning

Pace layers provide many-leveled corrective, stabilizing feedback throughout the system. It is in the contradictions between these layers that civilization finds its surest health. I propose six significant levels of pace and size in a robust and adaptable civilization.

Related: 3. Illusion of scalar speed ≠ dimensional velocity - https://mastodon.social/@dahukanna/115751407204010708

Related: 4. Comfort Food for the Thinking Class: The Great Intellectual Stagnation, the economics of attention - https://www.joanwestenberg.com/comfort-food-for-the-thinking-class-the-great-intellectual-stagnation/ via https://mato.social/@josemurilo/115751054614151022
#tyrannyOfAesthetics

Comfort Food for the Thinking Class: The Great Intellectual Stagnation

Wander into any bookstore (I dare you.)  The non-fiction table will be all but dominated by the usual suspects: Malcolm Gladwell's latest exploration of how some counterintuitive thing is actually the opposite of what you'd expect, a David Brooks meditation on character and virtue, something by Michael Lewis about how

Related: 5. Thread on “Cognitive Recall Atrophy - CRA” - https://mastodon.social/@dahukanna/115756945829133248

Related: 3.1. This reply - https://toot.cat/@zygmyd/115752235523994274
“This is really fascinating. Every single person in a company that ships AI-emitted code, policy, product, and so on will be an outsider. The managerial experience of having first hand knowledge of exactly none of the work going on and still being accountable but for every employee.”

#tyrannyOfAesthetics

Related: 6 - Mind-bending addiction amplification, or why a qualified attorney would knowingly destroy an entire law firm and 3 other lawyers’ careers -

‘An attorney couldn't stop using Grok (?!) to help draft filings, producing "a flood of tainted filings" & apparently triggering the implosion of a law firm & 3 lawyers' careers 🤖😵 The court called her misconduct "particularly egregious & prolific"’- https://cases.justia.com/federal/district-courts/mississippi/msndce/1:2024cv00074/49169/79/0.pdf

https://techpolicy.social/@ericgoldman/115758771102887176

#tyrannyOfAesthetics

Related: 7a - “Instead of having a hot divorcee Saturday night instead I wrote about how after announcing my divorce on Instagram, AI impersonated me in a hidden metadata field. Fuck the techbro oligarchs. https://eiratansey.com/2025/12/20/i-announced-my-divorce-on-instagram-and-then-ai-impersonated-me/“

- https://glammr.us/@TeamMidwest/115754711143502617
#tyrannyOfAesthetics

I announced my divorce on Instagram and then AI impersonated me « Eira Tansey

Related: 7b - “But what I vehemently object to in this situation is the use of the first-person voice without my review or permission. The language used in the description makes it sound as if I wrote it (“In this post, I share my personal journey…”).”

- https://eiratansey.com/2025/12/20/i-announced-my-divorce-on-instagram-and-then-ai-impersonated-me/

#tyrannyOfAesthetics

Related: 7c - “Because I have fiercely protected my authorship throughout my life and what my name is attached to, any generative AI writing that purports to be in my voice without my informed consent is a profound violation of my authorial voice, agency, and frankly it feels like fraud or impersonation.”

- https://eiratansey.com/2025/12/20/i-announced-my-divorce-on-instagram-and-then-ai-impersonated-me/

#tyrannyOfAesthetics

Related: 7d - “As an archivist who has spent almost twenty years thinking about accuracy in information, it makes my skin crawl that there is a metadata field with the sole purpose of generating SEO-engagement purporting to be my voice that doesn’t disclose the authorship was actually non-consensual AI.”

- https://eiratansey.com/2025/12/20/i-announced-my-divorce-on-instagram-and-then-ai-impersonated-me/

#tyrannyOfAesthetics

Related: 8 - But something else responsible for a large & steadily increasing share of public knowledge production also has no organic reality-detection capability = the large language model (LLM). This is central to what we mean when we argue that LLMs destroy knowledge: that for every verifiably accurate account of the world, LLMs produce a Borgesian library of superficially plausible alternates - sufficiently plausible that effort & energy must be invested #tyrannyOfAesthetics
- https://social.coop/@adamgreenfield/115771986742330648
Adam Greenfield (@[email protected])

But there’s something else in our lives which is responsible for a large and steadily increasing share of public knowledge production, and which also has no organic reality-detection capability, and that’s the large language model. This is central to what we mean when we argue that LLMs destroy knowledge: that for every verifiably accurate account of the world, LLMs produce a Borgesian library of superficially plausible alternates – sufficiently plausible that effort and energy must be invested

Related: 9 - Don’t think and decide for yourself; only follow the instructions of others.
“In 1941 book, Escape from Freedom, German psychoanalyst Erich Fromm argued that the rise of fascism could partly be explained by people preferring to surrender freedom in exchange for reassuring certainty of subordination. AI-LLM (produces information but without necessarily deepening human understanding) offers new way to surrender burden to think & decide for yourself.”
- https://mastodon.social/@andrewstroehlein/115784707689297472
#tyrannyOfAesthetics

Related: 10 - If an outcome is “too good to be human talent” (or exceeds your novice knowledge or skill level), then it must be Oi👁️.

‘We are in time where actual artists being accused of AI work
-physical artist with amazing talent & imagination, booted from Cara
-film industry DP photographer defending posts on IG, has amazing sense of colour & composition.
Take a breath before accusing humans of being too good at what they do=how AI is dumbing users down.’

- https://mastodon.social/@CStamp/115783838519998286
#tyrannyOfAesthetics

Related: 11 - “Answer-shaped objects”, or responses don’t always contain valid answers. “Quelle surprise”?

“It has been unpleasant realising just how many people consider “answer-shaped objects” and “answers” to be the same thing.” - https://eigenmagic.net/@abstractcode/115795215226319814

#tyrannyOfAesthetics

Related 12 - “A beautiful design that people can’t read isn’t beautiful; it’s broken.” On the design architect of “Liquid Glass” - https://medium.com/macoclock/apple-just-fired-the-designer-who-made-ios-26-unreadable-heres-what-truly-happened-f6606bbc5ddd#

“Did any designers at Apple push back on the new design for the Mac in Tahoe?

Ugh. It's almost like they forgot all UI principles that have been in place since like the 80s.

This ‘Minimalism’ is cancer.” - https://mastodon.social/@marioguzman/115840528587021524

#tyrannyOfAesthetics

Apple Just Fired the Designer Who Made iOS 26 Unreadable. Here’s What Truly Happened.

Now he’s going to Meta, Stephen Lemay is taking over, and Apple employees are publicly celebrating. Here’s the full story.

Related 13 - A fast, confident response from a word-calculator machine (the large language model, LLM), with no intelligence involved, artificial or otherwise, that produces answer objects with veracity anywhere between 0-99% (accuracy, reliability & consistency), is anti-intelligence.

1. Human cognition is not merely expressing a bunch of calculated, grammatically correct words - https://mastodon.social/@dahukanna/115873119314743574
2. See 11 - Answer-shaped objects don’t only contain 100% accurate responses - https://mastodon.social/@dahukanna/115798550660399251

#tyrannyOfAesthetics

Related 14 - https://wien.rocks/@noheger/115877698373215218

Do not use your established learned behavior + affordances or real-time eye & hand proprioception. Just somehow guess where to place your mouse to enlarge a window on a computer.

See https://mastodon.ar.al/@aral/115909848315197076 and https://mastodon.ar.al/@aral/115909954386205523

Also see related 12 - https://mastodon.social/@dahukanna/115838095145648099

And https://mastodon.social/@dahukanna/115878704605344299

#tyrannyOfAesthetics

Norbert Heger (@[email protected])

Attached: 1 image Struggling to resize windows on macOS Tahoe? Here’s why. https://noheger.at/blog/2026/01/11/the-struggle-of-resizing-windows-on-macos-tahoe/

Related 15 - https://mastodon.scot/@kim_harding/115892987818847731

“Language is _primarily a tool for communication_ (expression, or externalizing cognitive thoughts & actions, like drawing or speaking) rather than thought
https://gwern.net/doc/psychology/linguistics/2024-fedorenko.pdf

Something the LLM charlatans fail to understand. LLM can string words together based on massive data sets & sophisticated stats, but they cannot think, reason, do science, or replace human thinking in any way...”

#tyrannyOfAesthetics

Related 16 - https://hachyderm.io/@astronomerritt/115907971982496032

“so yeah for my entire life, the left lens in my glasses has had whatever prescription would keep my eyes moving together. literally, it was just a cosmetic lazy-eye-prevention thing because if the brain doesn’t use the eye anyway, so what? just stop the bastard thing from drifting off by itself.”

So-called modern ophthalmology practice is a symptomatic, cosmetic fix rather than one addressing the “root cause” - the medical equivalent of an “LGTM” PR approval.

#tyrannyOfAesthetics

Related 17 - https://mastodon.social/@nobsagile/115909610037016130

‘LinkedIn post celebrating how a Product Owner “no longer needs developers” because AI can now generate, deploy and adjust code.
It’s a good example of where thinking goes wrong. AI can lower the cost of producing code.
It does not lower the cost of owning a product.
Products still need operation, support, incident handling, knowledge redundancy, security decisions & continuous learning from real users.’
“A prototype is not a product”
#tyrannyOfAesthetics

Related 18 - Ron Dyck (@[email protected]):

Adults Lose Skills to AI. Children Never Build Them. https://www.psychologytoday.com/us/blog/the-algorithmic-mind/202603/adults-lose-skills-to-ai-children-never-build-them

Related 19 - https://mas.to/@maaretp/116305640508660326

“What are client organizations rewarding as ‘quality’ in bid competition if they grade a document you send and award best points for AI generated appearance of answers written by someone who publicly declares they aren’t knowledgeable in the offered service?

They are better than beginner in setting context for generation and splitting the work in agentic way, but the end result is still at least contractually obligating.”

#tyrannyOfAesthetics

@dahukanna UGHHHHHHHHHH. There's so much of this going around the community right now. It's a massive schism between the 0-0.3s and the people who actually have to support this bullshit that they keep chucking out into the market.
@mayintoronto why are they not the ones deploying, running and operating that prototype in production - 🤦🏾‍♀️.
Get their favorite OI👁️ to do that and complete the cycle.

@dahukanna
It seems to me that there are two broad alternative explanations of the difference between AI (general) and human intelligence (HI).

1. There is something 'other', 'additional', 'outside' the mechanics of memory, storage, processing that constitutes consciousness and therefore intelligence. If we subscribe to that model, there's little possibility that AI can achieve or exceed parity with HI until we do understand what it is. We may get some simulacrum of HI but it will always miss the mark. Or

2. HI and its consciousness is a sophisticated emergent property of basically simple foundational mechanics. AI may have some of those but may not have other key mechanisms, such as, perhaps (personal speculation):

2.1 building certainty from repeated experience (but not having a compilation time cut-off) - having an open ended feedback opportunity. Certainty grows and new experience can influence, but the stronger the trust & certainty, the less the delta influence from the new experience (PTSD might be a manifest exception to this model)

2.2 Bootstrapping from concrete input from senses, trust is built. These trusted foundations might build into trusted combinatorial abstractions, such as 'table' or 'dog'. These would combine sensory stuff like, edge, colour, shape, smell, texture with learned stuff like the concept of external named entities like 'me', 'mummy', 'table', 'dog', etc. I suspect the trust in ever more fractally combinatorial abstractions builds as experience happens. It builds a tower, providing the trust in the foundations are strong and that trust extends up the abstraction stack, even unto genuinely abstract concepts, like religion, philosophy, art/music appreciation, nostalgia, bigotry, political affiliation, & other memes (the Dawkins original definition). If this model is a reasonable approximation for the basis for human consciousness - merely an emergent behaviour, then AI might get there with a) feedback iterations forever
b) parity of importance/significance between concrete input and the infinite hierarchy of subsequent abstractions based on experiential trust over time (see a)

@gregalotl
The current large language model (LLM), treated as synonymous with AI, isn’t a generality - only expressed squiggles representing sounds used to communicate with other humans.
It doesn’t cover the vast internal richness that is a human being, let alone their abstract “intelligence”. The effect might look emergent because it evolves over time, but the building blocks are a constant.
E.g. literally equating the abstract human classification framework of the “periodic table of atomic structures” with living organic beings = nonsense.
@dahukanna
Sorry Dawn, I can't correlate your response with what I was trying to propose, which, in summary, was:
Is human consciousness emergent behaviour from the mechanics we already understand or is it something unrelated and different from the mechanics. I then suggested a mechanism by which it might work, if it is emergent behaviour and I suggested that if an emergent behaviour model is (somewhat) correct then an analogous maturity path might eventually evolve for GenAI to display functionally equivalent consciousness. I understand that LLMs are a very long way short of any of this but may constitute one of the trust building mechanisms, in their learning phase. Trouble is, as I alluded to, LLMs shut the door after learning, so there's no subsequent refinement.

@gregalotl limited by 500 chars.

Emergence == changed “observed” behaviour over time, while the building blocks producing that behaviour stay constant.
Literally equating a complicated living organism, comprising many interconnected sub-systems, with an abstract categorization framework = nonsense. The periodic table of atomic elements helps explain some of what we observe in living systems. It does not define them.

AI-LLM is equivalent of chemical periodic table, not emergent/evolving human intelligence.

@dahukanna
I really wasn't extrapolating LLM's capacity in the way you suggest I have. See my post that crossed over with your response
@gregalotl apologies Greg, did not mean to suggest that was your extrapolation, more my philosophical position on your commentary.
@dahukanna
Just carefully re read your post. Another point. The 'vast richness' you talk about is potentially the emergence of the complex 3D (possibly 4D) network of abstractions. If you agree that we have a limited array of senses at and before birth, then by definition, they must be the starting point and understanding built on combinations of those. We and other animals wouldn't get very far if we stopped at that point, so it seems like an obvious extrapolation to deduce that we don't and we trust our senses first then we trust some very basic sense combinations: eg
light/dark contrast = edge
Some config of edge = straight edge
Different config = curve,
Position of config = orientation, etc
Then, in a simple example, sensory input of light contrast can resolve to differently oriented edges, later learning could resolve edges into volumes and later, with the much later (18months-2years) additional learning could resolve the light and dark contrast into the abstract concept, named 'table'. Later in life, table, carpet, smell of stale beer, etc resolve to a trusted concept of a seedy bar. It hangs together for me and I don't think it's necessary or even logical to assume consciousness is a whole other property or that the complexity of HI is, at least in theory, not replicable synthetically. Electronics may not have the efficiency or even capability to meaningfully do it. But I suspect it's theoretically possible, if deeply frightening.
@gregalotl @dahukanna
The problem, I think, is in the asking “why not AI”. I think the better credible question is “why should we believe that simple networks, particularly transformer models, are sufficient for AI”. It’s much narrower. It may not be impossible for our current tech to be intelligent, but there’s little reason to believe we have a magic recipe already.
@ThreeSigma
I take your point but the essence of my proposition is that it's possible that deep complexity can emerge from simple mechanisms. A 3-body mechanism displays unpredictable outcomes - chaos. Chaos is tamed in part by strange attractors, so it isn't just white noise; there are features! Assuming for a moment that my proposition has some vague and simplistic truth to it, it means that each of us starts with uncomplicated signals and constructs a vast and individual network. Patterns are discernible but it is individually unique - character, beliefs, flaws, unhelpful patterns baked in and diagnosed as an identifiable medical problem, and so on. I know I'm just constructing an explanatory (for me, at least) artifice and simplification. But hey, I'm just putting it out there to compare notes 😀
@dahukanna

@gregalotl
Thanks for the details.
I agree that humans create an internal cognitive, perception mental model network that is informed from inherited DNA codes and real-time sensed signals from their containing environment.

LLMs have stochastic variability across 5/7/12/20/30 billion parameters in a single point-in-time model doing inference, not perception, on the raw text data supplied during imprinting, not training.

@ThreeSigma

@dahukanna @gregalotl

The thing is, you can test emergent properties even if you can’t predict them. The AI bros are working with the assumption that if you make the model big enough and show the model enough pictures of dogs and cats, telling it which is which, it will eventually learn what a fox is.

But there is no evidence yet that this happens, and not for lack of trying.

I don’t know of any models that exhibit chaos (in the mathematical sense)

@ThreeSigma
I have a theory/model about some of that too. 😬🫣

Humans have the capacity for creativity and I wondered if one might find an explanation in a combination of strange attractors, and an ability to 'defocus', so that a compelling answer in a completely different context becomes enough of a pattern match to offer the foundation for a solution in the original, more rigid problem/solution space. Say I cannot conceive a way to stop Trump invading Greenland, I may have expertise in animal behaviour or political science or poker that gives me an analogous pattern that helps me map back to the original context with a novel & creative solution, say calling his bluff or distracting him until election or somesuch. In strange attractor terms, the strange attractor in the parallel experience isn't an unachievable jump from the starting point but just requires that defocus to see parallel patterns emerge.

I don't know. It's just a thought experiment but I find it fascinating and worth sharing for feedback.
@dahukanna