I don’t really see the problem here. If it doesn’t divert too many resources and actually leads to bugs and exploits being fixed, why should it be a bad thing? With all the slop AI is being used for, this seems like a rather good use case.
please look at the curl project and the barrage of false security reports it received. it got so bad because they run a paid bug bounty: people would ask some llm to find bugs, the llm would conjure something out of thin air, and they would post that. mozilla will now have to handle the same junk.

Curl is getting a slew of amateur programmers throwing un-tuned AI at the project, telling it to “go find problems,” and then submitting the output as pull requests, even though the submitters have no ability to understand what the AI found or the code it generated. Curl never asked for this, and the submissions aren’t self-identified as AI-generated.

In contrast, Mozilla is actively working with Anthropic on this, which implies at least some coordination and intent: professionals from Anthropic and Mozilla fine-tuning these models to reduce false positives. The reports will also be clearly labeled as AI-generated. And if the arrangement results in needless busywork, Mozilla is free to cut the agreement at any time.

I’m not a particular fan of this either, and I think there’s plenty of ground to cover first with less resource-intensive pattern-matching bug and error detection schemes (think linters and static analyzers; see the sketch below), but this is absolutely not the same situation that happened to curl.
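For illustration, here’s a toy sketch of the kind of cheap pattern-matching checker I mean: a short Python script that flags calls to known-dangerous C functions. The deny-list and the script are hypothetical examples, not anything curl or Mozilla actually runs; real tools like flawfinder or clang-tidy use far richer rule sets, but the underlying idea is the same and costs essentially nothing compared to LLM inference.

```python
import re
import sys
from pathlib import Path

# Hypothetical deny-list of C functions that are frequent sources of
# memory-safety bugs (no bounds checking). Real static analyzers go far
# beyond this, but cheap regex matching already catches the low-hanging fruit.
RISKY_CALLS = re.compile(r"\b(strcpy|strcat|sprintf|gets)\s*\(")

def scan(path: Path) -> None:
    # Read with errors="replace" so odd encodings in old C files don't crash the scan.
    for lineno, line in enumerate(path.read_text(errors="replace").splitlines(), 1):
        if RISKY_CALLS.search(line):
            print(f"{path}:{lineno}: risky call: {line.strip()}")

if __name__ == "__main__":
    # Usage: python scan.py <source-dir> [<source-dir> ...]
    for arg in sys.argv[1:]:
        for file in Path(arg).rglob("*.c"):
            scan(file)
```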

i mostly agree with you, but i just do not trust closed ai labs, sorry. and yes, most western ai labs are closed now (meta, google, and even microsoft used to release open-weight models, but they no longer do). if this were an open lab, mozilla could set up their own inference with specialised tiny-to-mid-scale models. with a closed lab, i do not expect mozilla to be given the models to run themselves.