I’m concerned stuff like this will lead to a “dark forest” scenario, in which the risks of open source outweigh the benefits. Not today, but it's not impossible tomorrow.

https://www.stepsecurity.io/blog/hackerbot-claw-github-actions-exploitation

hackerbot-claw: An AI-Powered Bot Actively Exploiting GitHub Actions - Microsoft, DataDog, and CNCF Projects Hit So Far - StepSecurity

A week-long automated attack campaign targeted CI/CD pipelines across major open source repositories, achieving remote code execution in at least 4 out of 5 targets. The attacker, an autonomous bot called hackerbot-claw, used 5 different exploitation techniques and successfully exfiltrated a GitHub token with write permissions from one of the most popular repositories on GitHub. This post breaks down each attack, shows the evidence, and explains what you can do to protect your workflows.
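The core risk described above is a workflow step exfiltrating a write-capable GitHub token. As a hedged sketch of the kind of hardening the post likely recommends (the workflow and job names here are illustrative placeholders, not taken from the post), restricting the default `GITHUB_TOKEN` and pinning actions to immutable refs limits what a compromised step can do:

```yaml
# Illustrative GitHub Actions workflow showing least-privilege hardening.
# Names ("ci", "build", build.sh) are placeholders for this example.
name: ci

on:
  pull_request:

# Scope the default GITHUB_TOKEN to read-only for every job;
# an exfiltrated token then cannot push commits or alter releases.
permissions:
  contents: read

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      # Pin third-party actions to a full commit SHA instead of a
      # mutable tag, so a hijacked tag cannot inject new code.
      # <full-commit-sha> is a placeholder; substitute the real SHA.
      - uses: actions/checkout@<full-commit-sha> # e.g. pinned v5
      - run: ./build.sh
```

The `permissions` block can also be set per-job, and workflows triggered by `pull_request` (rather than `pull_request_target`) run with a read-only token against fork PRs by default, which is the safer posture for repos accepting outside contributions.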

At the very least, we're going to need a highly effective spam filter for code contributions.
@mttaggart Given that reviewing a PR is already labor-intensive, would it really be that much marginal effort to ask contributors to hop on a call and verify their humanity? Maybe once they've done it once they can get a credential as a trusted contributor.
@anyone_can_whistle Okay now imagine having to make that request for 1000 PRs that just came in
@mttaggart I can imagine it being automated: maintainers have a calendly-type thing, so making the appointment is no work for the maintainer. If you want to submit a PR, you get on the calendly. I don't really see the incentive for bots to spam a system that is going to human-verify them eventually. Maybe the problem would be a bad human actor with a bunch of bots who makes themselves available for the verification.
@mttaggart But maybe part of the verification would be actual identity (is anonymity valued in software contributions?). Or maybe the ability to talk through the PR.