today, in attempting to do a close read of a piece of alleged research to assess how much of a role a human had in writing it and to what extent the arguments and sources support the key takeaways it suggests

we found a section which purports to describe the oversight that a human performed over an LLM in writing it

and that section has several major non sequiturs that have nothing to do with the subject matter

it's kind of a new low, asking the machine to write the section that describes how you supervised the machine, and then not even reading that section
it really worries us that people are going to take this sort of piece seriously, that real strategic decisions about activism are going to be based on it

we're not naming it because the point is not to have a conversation about a specific piece

it's to remind everyone to engage your brains when reading these things. don't be on autopilot, don't let your assessment of plausibility be based on how formal the writing is or anything like that

we also found that the top-level takeaway of this piece had very little to do with the arguments it advanced; it's just that it's long enough and formalistic enough that you really have to go over it slowly to realize there isn't actually a connection there
and it's painful to go over these things slowly because at every level, they fail to say anything. every list of five bullet points has two that seem vaguely on-topic and three that could make sense if the rest of the piece explains them in some way, and then only after reading the whole thing do you realize it doesn't

oh yeah by the way

these newfangled spam generators do tap into several important effects that haven't been a big problem in the past, such as the linguistic thing that the stochastic parrots paper does a great job of explaining (humans assess the credibility of written information by building a mental model of the person who wrote it, but here, that model is spoofed)

but, also...

old-fashioned automation bias is a very strong factor in why people take them seriously, too

wikipedia gives a description we like: "Automation bias is the propensity for humans to favor suggestions from automated decision-making systems and to ignore contradictory information made without automation, even if it is correct."

it's well-studied, going back many decades. people just... assume the machine must have had some reason for saying that, and discount their own judgement.

anyway, the point at which you're basing a major activism decision, with serious consequences for the world, on something that came out of one of these machines

is the point at which you should be like hey, wait, this needs extremely close study or else it's worse than useless

strategic decisions should be made while emotionally centered, and as grounded as possible in an assessment of what you do and don't know

get into that headspace before making them. that, really, is the core thing that we personally make sure to do, and it's important for so many reasons that go far beyond just spam generators

@ireneista and if you have to do that ahead of when you need to act on it - "if X then I do Y" - then better that than winging it in response to things you can see are going to mess with your head, yeah
@flippac exactly