got hit by a wave of slop prs last week so i guess it’s time to buckle down and spend my morning…

writing an ai policy

this looks-legitimate-but-trash slop reminds me of corporate phishing tests

the biggest tragedy for me is that i’ve spent A LOT of time and energy to encourage contributions to my projects and now i have to spend TBD energy on gate-keeping

it’s very disheartening to try to walk the line between “if you need help, reach out” and “don’t waste our time with thoughtless slop”

@hynek the part I dislike the most is the pressure to include "instructions for agents" in contrib docs. I might do it (at the end, clearly delineated so that nobody has to read it) because it seems somewhat effective, but it feels inherently icky.

Also, I'm sure you've seen it, but the FastAPI policy seems closest to your vibe. Having done a lot of reading in this space, that one stands out for being short, clear, and friendly in tone.

@sirosen I'm moving ours into a separate file to not tone-poison the hopefully-welcoming contributing guide. and I honestly don't think with our guide there is a need for instructions for agents.

https://github.com/python-attrs/attrs/pull/1518

Add explicit AI policy by hynek · Pull Request #1518 · python-attrs/attrs

Due to recent events, I'm afraid this is necessary.

@hynek @sirosen, "Do not post LLM-generated review comments unless you agree with them." You really need to remove the second part there. The problem with slop is not only that it's often bullshit, but also that it has very low signal-to-noise ratio.
@mgorny @sirosen unfortunately you’re right. I got hit by expansive LLM glazing just yesterday after asking for feedback _sigh_