"once-per-minute cron job inexplicably running exactly 38 times per minute" is definitely the weirdest cause of a bug I have seen this week
twist: I had correctly identified it as a "multiple copies of supposedly-unique thing running concurrently" issue, then saw logs that appeared to confirm that and latched onto them. but I had forgotten to filter the logs to just production: production was doing one run per minute, as it should, and the other 37 were the task scheduler in various dev/testing environments firing off their own instances of the same task

the real cause was actually just several copies of the production task enqueued in separate minutes, which backed up in a queue while the workers were down, then all got picked up at once when the workers reappeared

tl;dr: complex async system debugging is hard and i should go back to bed

@emily Let those who have *not* tried to debug a prod issue from dev logs cast the first stone.

(Throwing literally all your logs into a single massive logging instance, typically via a SaaS thingy, definitely makes this sort of mix-up *easier* to fall into. But from my own embarrassing experience, it's still surprisingly possible in other contexts.)

@whbboyd ask me about the time I panicked and shut down a server because I saw a bunch of processes I didn't recognize as normal server stuff, and thought someone had hacked into it and installed a whole desktop for some reason

at which point my own local computer suddenly started shutting down