Sequoia has a bug bounty program, and nearly all hunters use LLMs. If we decided to prohibit LLM submissions, we might as well close down the program. When interacting with hunters, I'm experimenting with saying: "Please keep your response to less than 200 words. Do not change the topic. Only consider the reported issue." Initial results are positive: the responses are still from an LLM, but they are shorter and seem more on-topic.
@nwalfield So… you're going to pay out to LLMs instead of supporting actual humans actually putting in the time to learn their craft? That does not sound like a good way to go about things. :F
@phryk I feel like I'm in a bind. I'm against LLMs, don't use them myself, and don't want people who contribute to Sequoia to use them. That said, these hunters with their LLMs are finding issues (though most of them are inconsequential). Should I ignore their reports and leave the issues unfixed? What would you do in my situation?

@nwalfield @phryk

if sequoia has issues that can be found by llms, couldn't you get some free-for-open-source tokens and get the same reports for free?

you'd still have to interact with an llm, but at least you wouldn't have to pay some random person to act as a human-llm-proxy in github issues.

(and then ban 3rd party llm contributions to reduce the workload)

@nwalfield Yeah, I have to agree with @guenther here: if automated tools find issues, that's fine, but you should just run those tools yourself. You'll probably be better at it, since you have actual knowledge of infosec and the codebase.

If you allow LLMs in your bug bounty program, you're not only paying people who don't put in the time to actually become competent at infosec, you're also systematically disadvantaging those who do, which in turn weakens the infosec landscape.