today, in attempting a close read of a piece of alleged research, to assess how much of a role a human had in writing it and to what extent the arguments and sources support the key takeaways it suggests

we found a section which purports to describe the oversight that a human performed over an LLM in writing it

and that section has several major non-sequiturs that have nothing to do with the subject matter

it's kind of a new low, asking the machine to write the section that describes how you supervised the machine, and then not even reading that section
it really worries us that people are going to take this sort of piece seriously, that real strategic decisions about activism are going to be based on it

we're not naming it because the point is not to have a conversation about a specific piece

it's to remind everyone to engage your brains when reading these things. don't be on autopilot, don't let your assessment of plausibility be based on how formal the writing is or anything like that

we also found that the top-level takeaway of this piece had very little to do with the arguments it advanced, it's just that it's long enough and formalistic enough that you really have to go over it slowly to realize there isn't actually a connection there
and it's painful to go over these things slowly because at every level, they fail to say anything: every list of five bullet points has two that seem vaguely on-topic and three that could make sense if the rest of the piece explained them in some way, and only after reading the whole thing do you realize it doesn't

oh yeah by the way

these newfangled spam generators do tap into several important effects that haven't been a big problem in the past, such as the linguistic thing that the stochastic parrots paper does a great job of explaining (humans assess the credibility of written information by building a mental model of the person who wrote it, but here, that model is spoofed)

but, also...

old-fashioned automation bias is a very strong factor in why people take them seriously, too

wikipedia gives a description we like: "Automation bias is the propensity for humans to favor suggestions from automated decision-making systems and to ignore contradictory information made without automation, even if it is correct."

it's well-studied, going back many decades. people just... assume the machine must have had some reason for saying that, and discount their own judgement.

@ireneista Oooh TIL that there’s a name for that bias - thank you!

… would have been helpful to know and read up on when I wrote https://tomrenner.com/posts/400-year-confidence-trick … could have saved a few words (and improved the argument) by having a proper definition of the phenomenon I was observing to hand!

LLMs are a 400-year-long confidence trick

In 1623 the German Wilhelm Schickard produced the first known designs for a mechanical calculator. Twenty years later Blaise Pascal produced a machine of an improved design, aiming to help with the large amount of tedious arithmetic required in his role as a tax collector. The interest in mechanical calculation showed no sign of reducing in the subsequent centuries, as generations of people worldwide followed in Schickard and Pascal's footsteps, subscribing to their view that offloading mental energy to a machine would be a relief.

@trenner you're very welcome! a lot of what we do is just remembering specific things and connecting them to other specific things in distant contexts...

@ireneista yes exactly! I’m always grateful when I find the proper name for something - it helps encapsulate the idea and place it properly in the pantheon of related topics.

I’m somewhat in awe of people who can define and name phenomena in a way that helps me and others reason about them. It’s a sign of really thorough reasoning.

@trenner well thank you! we try our best