today, in attempting to do a close read of a piece of alleged research to assess how much of a role a human had in writing it and to what extent the arguments and sources support the key takeaways it suggests

we found a section which purports to describe the oversight that a human performed over an LLM in writing it

and that section has several major non sequiturs that have nothing to do with the subject matter

it's kind of a new low, asking the machine to write the section that describes how you supervised the machine, and then not even reading that section
it really worries us that people are going to take this sort of piece seriously, that real strategic decisions about activism are going to be based on it

we're not naming it because the point is not to have a conversation about a specific piece

it's to remind everyone to engage your brains when reading these things. don't be on autopilot, don't let your assessment of plausibility be based on how formal the writing is or anything like that

we also found that the top-level takeaway of this piece had very little to do with the arguments it advanced; it's just long enough and formalistic enough that you really have to go over it slowly to realize there isn't actually a connection there
and it's painful to go over these things slowly, because at every level they fail to say anything: every list of five bullet points has two that seem vaguely on-topic and three that could make sense if the rest of the piece explained them somehow, and only after reading the whole thing do you realize it doesn't

oh yeah by the way

these newfangled spam generators do tap into several important effects that haven't been a big problem in the past, such as the linguistic thing that the stochastic parrots paper does a great job of explaining (humans assess the credibility of written information by building a mental model of the person who wrote it, but here, that model is spoofed)

but, also...

old-fashioned automation bias is a very strong factor in why people take them seriously, too

wikipedia gives a description we like: "Automation bias is the propensity for humans to favor suggestions from automated decision-making systems and to ignore contradictory information made without automation, even if it is correct."

it's well-studied, going back many decades. people just... assume the machine must have had some reason for saying that, and discount their own judgement.

anyway, the point at which you're basing a major activism decision, with serious consequences for the world, on something that came out of one of these machines

is the point at which you should be like hey, wait, this needs extremely close study or else it's worse than useless

strategic decisions should be made while emotionally centered, and as grounded as possible in an assessment of what you do and don't know

get into that headspace before making them. that, really, is the core thing that we personally make sure to do, and it's important for so many reasons that go far beyond just spam generators

@ireneista and if you have to do that ahead of when you need to act on it - "if X then I do Y" - then better that than winging it in response to things you can see are going to mess with your head, yeah

@ireneista I always thought this was a variant of deferring to the expert - treating the automation as an implementation of their (superior) expert knowledge. But that framing means that in contexts where I also have substantial knowledge, I can check it against the automation and either catch an error or learn something; it's not a reason to discount my own judgement.

(Related to why I prefer to use and make automations that record how a conclusion was reached)

@ShadSterling we're certain there has been sociology research on the relationship between automation bias and deference to experts, but we're not really up on the literature, alas

@ShadSterling but yeah, fun fact, there are actually laws on the books stemming from the previous round of harmful use of computers in decision-making in the 1970s, which for example require such decisions to include a description of how the conclusion was reached

to our knowledge nobody has attempted to apply these laws to generative ML, which we would very much like to see happen because there's no way the ML can meet that bar

@ShadSterling these laws are the reason that so-called expert systems were invented (these days they're usually called decision trees)

because, e.g., in medical contexts people wanted to use the automation but they were required to be able to explain it, and using a tree structure means there's a clear rationale
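
here's a minimal sketch of what we mean, in Python; the triage rules and names are hypothetical, purely illustrative, not taken from any real system. the point is just that a tree hands back its whole reasoning path along with its conclusion:

```python
# a minimal sketch: a hand-written decision tree whose evaluation returns
# both a conclusion and the list of rules it checked to get there.
# the triage rules below are made up for illustration only.

from dataclasses import dataclass
from typing import Callable


@dataclass
class Node:
    question: str = ""                          # human-readable rule being checked
    test: Callable[[dict], bool] | None = None  # predicate over the case data
    if_yes: "Node | None" = None
    if_no: "Node | None" = None
    conclusion: str | None = None               # set only on leaf nodes


def decide(node: Node, case: dict, trace: list[str] | None = None) -> tuple[str, list[str]]:
    """walk the tree, recording every rule and its answer along the way."""
    if trace is None:
        trace = []
    if node.conclusion is not None:
        return node.conclusion, trace
    answer = node.test(case)
    trace.append(f"{node.question} -> {'yes' if answer else 'no'}")
    return decide(node.if_yes if answer else node.if_no, case, trace)


# hypothetical rules, purely to show the shape of the thing
tree = Node(
    question="temperature over 38C?",
    test=lambda c: c["temp_c"] > 38,
    if_yes=Node(
        question="symptoms for more than 3 days?",
        test=lambda c: c["days"] > 3,
        if_yes=Node(conclusion="refer to clinician"),
        if_no=Node(conclusion="rest and re-check tomorrow"),
    ),
    if_no=Node(conclusion="no action needed"),
)

conclusion, rationale = decide(tree, {"temp_c": 38.5, "days": 5})
print(conclusion)  # refer to clinician
print(rationale)   # the full chain of rules checked, with answers
```

the trace is the whole trick: every conclusion arrives with the exact chain of rules that produced it, which is the bar we mentioned above that generative ML can't meet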

@ShadSterling of course, everyone's been doing bare-minimum lip-service compliance, because those laws don't really go far enough and they've never really been, like... a thing with a lot of public support. alas.

@ireneista Oooh TIL that there’s a name for that bias - thank you!

… would have been helpful to know and read up on when I wrote https://tomrenner.com/posts/400-year-confidence-trick … could have saved a few words (and improved the argument) by having a proper definition of the phenomenon I was observing to hand!

LLMs are a 400-year-long confidence trick

In 1623 the German Wilhelm Schickard produced the first known designs for a mechanical calculator. Twenty years later Blaise Pascal produced a machine of an improved design, aiming to help with the large amount of tedious arithmetic required in his role as a tax collector. The interest in mechanical calculation showed no sign of reducing in the subsequent centuries, as generations of people worldwide followed in Pascal and Wilhelm’s footsteps, subscribing to their view that offloading mental energy to a machine would be a relief.

@trenner you're very welcome! a lot of what we do is just remembering specific things and connecting them to other specific things in distant contexts...

@ireneista yes exactly! I’m always grateful when I find the proper name for something - it helps encapsulate the idea and place it properly in the pantheon of related topics.

I’m somewhat in awe of people who can define and name phenomena in a way that helps me and others reason about them. It’s a sign of really thorough reasoning.

@trenner well thank you! we try our best
@ireneista oh! I bet that’s the thing I don’t do that sometimes causes problems - I think about the ideas, not the author. Tends to help me understand ideas better, but also confuses people. Why would people do that?

@ShadSterling well, because it was highly effective once upon a time. in the days when things had human authors, it was super useful as a way to guess whether the person actually had a point that you just weren't understanding yet, or if they just didn't know what they were talking about.

definitely not entirely reliable, in part because it does get entangled with respectability dynamics that socially mediate who is allowed to speak, and that sort of thing

@ShadSterling oh uh building that person-model is also highly effective for recognizing subtext, larger political implications, and all that sort of thing

@ShadSterling plus it just seems to be, fundamentally, built into the human brain (we are neither a linguist nor a neuroscientist; this is our lay understanding, and we're not sure how strong the evidence for it is)

the task of communicating is the task of taking a mental structure that exists in someone else's brain and building something analogous to it in your own

@ireneista “the task of communicating is the task of taking a mental structure that exists in someone else's brain and building something analogous to it in your own” - I think this is the crux of it.

The thing it made me think of was the theory that the whole reason we have these complex brains and abstract thinking is social risk assessment.

@ireneista But we're pretty far into using these brains for external reality and formal logic, with millennia of intentionally crafting knowledge to be independent of individual person-models. And over the last century or two we've become very dependent on technologies built with that kind of non-social knowledge, which I'd guess would have been hard to do without a style of information ingestion that skips the person-model-building effort becoming fairly pervasive.

@ireneista Oh goddamnit, not having to constantly model other persons is a fucking privilege.

Being safe enough to nurture other kinds of mental processes that support understanding systems fundamentally different from the predictability patterns of people.

So homicidal bigots and machismo enforcers are holding us back in a bigger way than I’d realized.

@ShadSterling ah! that sounds true, yeah
@ireneista it's always this; generating content to "make it easier" and "save time" just makes it harder and wastes more time for all the readers. "Better" models don't improve the thoughts that haven't gone into it, just the camouflage, and the effort necessary to decide that it's vapid fumes. The asymmetrical scaling is severe, both in individual workload and in numbers affected. The only winning move is not to play.
@ireneista the pattern from the pre-llm era i most associate this with is "specification that doesn't actually specify anything"
@ireneista but, thinking about it, I recently got the same feeling reading california ab-1043 (2025-2026), where certain provisions don't really make sense because other parts they would obviously require are just nowhere to be found
https://mastodon.social/@rakslice/116161658804295422
@ireneista of course there are lots of possible reasons for this in a formal document edited by a collaborative bureaucracy, but let's say it feels familiar