today, in attempting a close read of a piece of alleged research to assess how much of a role a human had in writing it, and to what extent the arguments and sources support the key takeaways it suggests

we found a section which purports to describe the oversight that a human performed over an LLM in writing it

and that section has several major non sequiturs that have nothing to do with the subject matter

it's kind of a new low, asking the machine to write the section that describes how you supervised the machine, and then not even reading that section
it really worries us that people are going to take this sort of piece seriously, that real strategic decisions about activism are going to be based on it

we're not naming it because the point is not to have a conversation about a specific piece

it's to remind everyone to engage your brains when reading these things. don't be on autopilot, don't let your assessment of plausibility be based on how formal the writing is or anything like that

we also found that the top-level takeaway of this piece had very little to do with the arguments it advanced; it's just long enough and formalistic enough that you really have to go over it slowly to realize there isn't actually a connection there
and it's painful to go over these things slowly, because at every level they fail to say anything: every list of five bullet points has two that seem vaguely on-topic and three that could make sense if the rest of the piece explained them somehow, and only after reading the whole thing do you realize it doesn't

oh yeah by the way

these newfangled spam generators do tap into several important effects that haven't been a big problem in the past, such as the linguistic one that the stochastic parrots paper does a great job of explaining (humans assess the credibility of written information by building a mental model of the person who wrote it, but here, that model is spoofed)

but, also...

old-fashioned automation bias is a very strong factor in why people take them seriously, too

wikipedia gives a description we like: "Automation bias is the propensity for humans to favor suggestions from automated decision-making systems and to ignore contradictory information made without automation, even if it is correct."

it's well-studied, going back many decades. people just... assume the machine must have had some reason for saying that, and discount their own judgement.

@ireneista I always thought this was a variant of deferring to the expert - treating the automation as an implementation of their (superior) expert knowledge. But that means in contexts where I also have substantial knowledge, I can check it against the automation and either catch an error or learn something; it's not a reason to discount my own judgement.

(Related to why I prefer to use and make automations that record how a conclusion was reached)

@ShadSterling we're certain there has been sociology research on the relationship between automation bias and deference to experts, but we're not really up on the literature, alas

@ShadSterling but yeah, fun fact, there are actually laws on the books stemming from the previous round of harmful use of computers in decision-making in the 1970s, which for example require such decisions to include a description of how the conclusion was reached

to our knowledge nobody has attempted to apply these laws to generative ML, which we would very much like to see happen because there's no way the ML can meet that bar

@ShadSterling these laws are the reason that so-called expert systems were invented (the tree-of-rules structure they're built on is, these days, usually just called a decision tree)

because, e.g. in medical contexts, people wanted to use the automation but were required to be able to explain it, and using a tree structure means there's a clear rationale for every conclusion
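
to make that concrete, here's a minimal sketch of the idea in Python (every name, rule, and threshold here is hypothetical, purely illustrative, not from any real system): a rule tree where every conclusion carries the chain of checks that produced it

from dataclasses import dataclass
from typing import Callable, Union

@dataclass
class Node:
    question: str                    # human-readable description of the check
    test: Callable[[dict], bool]     # predicate applied to the input record
    if_true: Union["Node", str]      # subtree, or a final conclusion (leaf)
    if_false: Union["Node", str]

def decide(node, record, trail=None):
    # walk the tree; return the conclusion plus the list of checks taken
    trail = [] if trail is None else trail
    if isinstance(node, str):        # leaf: we've reached a conclusion
        return node, trail
    outcome = node.test(record)
    trail.append(f"{node.question} -> {'yes' if outcome else 'no'}")
    return decide(node.if_true if outcome else node.if_false, record, trail)

# toy triage tree (made-up rules, just to show the shape)
tree = Node("temperature above 38C?", lambda r: r["temp_c"] > 38.0,
            Node("heart rate above 100?", lambda r: r["hr"] > 100,
                 "escalate to clinician", "monitor and re-check"),
            "no action")

conclusion, why = decide(tree, {"temp_c": 38.6, "hr": 112})
print(conclusion)        # escalate to clinician
print("; ".join(why))    # temperature above 38C? -> yes; heart rate above 100? -> yes

run it on a record and you get the conclusion plus a trail of every check that led there, which is exactly the sort of description those laws ask for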

@ShadSterling of course, everyone's been doing bare-minimum lip-service compliance, because those laws don't really go far enough and they've never really been, like... a thing with a lot of public support. alas.