A reporter submits a HackerOne report against #curl that includes "a crash in function NNN" with lots of complicated details.

With the little detail that function NNN was made up and does not exist in real code.

New input field added to HackerOne submissions for #curl
combined with a new policy to instantly ban every reporter who submits issues we deem AI slop
@bagder Some punishment for not ticking that box while still using AI is required. Just banning is not enough.
@bagder I saw a theory somewhere (maybe in one of these curl threads) that AI bug reports are actually a DDoS attack on maintainers
@davidr they certainly hamper our ability to do what we actually want to do, yes
@davidr @bagder I definitely consider it an attack vector - if only by way of attrition.
@bagder did you try AI to automatically detect AI generated reports?
@afink @bagder isn't this counterproductive?
@tuxta @bagder that's the joke. AI is always counterproductive...

@bagder

The obvious solution to AI slop is to have another AI that can detect AI slop!

But then they'll build an AI to get around the slop detecting AI

So you'll also need an AI that can detect the slop detecting defeating AI

Yes, this is all very reasonable

@bagder Please tell me that you filter out every report that has this ticked ;)
@bagder Does it make sense to make it a "don't know/no/yes" dropdown, where it would require an explicit action to mark it as "no"?
@forst @bagder I use this method to weed out bots from joining a Facebook group. It has 3 choices: 2 of them say "I am a bot" and one says "I am human". It is surprising that quite a few people click on bot. Well, it keeps bots and stupid people out.
@bagder you could also ask the AI some random math question, as LLMs cannot reason, so they can't give the answer.
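A minimal sketch of that challenge idea, for illustration only (whether a simple arithmetic question would actually filter out LLM-driven submitters is debatable, as other replies in this thread note; the function name is made up for this sketch):

```python
# Sketch of a "random math challenge" gate, as suggested above.
# Purely illustrative; not a real anti-AI defense.
import random

def make_challenge() -> tuple[str, int]:
    """Return a random arithmetic question and its expected answer."""
    a, b = random.randint(10, 99), random.randint(10, 99)
    return f"What is {a} + {b}?", a + b

question, answer = make_challenge()
print(question)
# A submitter would have to supply the correct sum to proceed.
```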
curl disclosed on HackerOne: HTTP/3 Stream Dependency Cycle Exploit

**Penetration Testing Report: HTTP/3 Stream Dependency Cycle Exploit** --- # **0x00 Overview** A novel exploit leveraging stream dependency cycles in the HTTP/3 protocol stack was discovered, resulting in memory corruption and potential denial-of-service or remote code execution scenarios when used against HTTP/3-capable clients such as `curl` (tested on version 8.13.0). This report details...

HackerOne
@bagder what the heck did I just read
@bagder AI generated garbage ?
Hai: Your HackerOne AI Security Agent

Accelerate your find-to-fix cycles. Transform your vulnerability response strategy with AI-powered efficiency


@bagder Ha, could not find it with my phone but wasn't sure enough.

Sending an invoice to the guy?

@bagder I believe that is known as a -1 day exploit
@bagder is "vibe reporting" a new trend?
@bagder I still don’t get why they do this. Are they really thinking that the AI they use found something? 🥴
@fubaroque there’s money on the line when you submit a valid report through a bug bounty program, and the cost to these people of asking AI to generate a report with a 0.1% chance of being right is $0.
@0xabad1dea @fubaroque Logically it would be good to make the cost to the submitter higher than the probable winnings (probability*bounty) of gaming the system. Then all but irrational or innumerate (or malicious) ones will go away.
@DamonHD @fubaroque imposing costs to the reporter on reporting real problems is how you never get a real problem reported again. bug bounties exist to incentivize doing the time-consuming work to put together the report on the problem.
@DamonHD @fubaroque for the record, what that project is trying now is adding a checkbox to the form asking, in a polite and nonjudgmental way, if AI was used in constructing the report; and formalizing that they will now completely ban anyone who submits clearly nonsense slop reports from submitting again (as distinct from reports where AI may have been used to translate, format, etc). I suppose you could call that ban risk "imposing a cost" though taking your shot and getting banned from that one specific project is still basically zero cost.

@0xabad1dea @fubaroque I should have said more clearly "The cost to a bad-actor submitter".

But these things are hard, particularly if you wish to support and be nice to the good actors.

I have essentially never allowed user comments on any of my sites since the late 90s because SPAMming is too easy to attempt. (And I also have received ~10,000 SPAM email attempts per day for all that time.)

@0xabad1dea @fubaroque i think you're greatly overestimating the chances here. 0.001% would be more realistic, security research isn't THIS easy
@domi @fubaroque thank you for letting me know that my arbitrary nonliteral number to demonstrate a general principle was arbitrary and nonliteral
@0xabad1dea @fubaroque np! use a linguistic description to avoid someone understanding it as literal next time /npa /gen

@domi let me spell this out as clearly, directly and literally as possible: You are being annoying. Sending useless, content-free objections to people you don't know, in reply to messages that weren't addressed to you, being told that it was annoying and then being like "you're welcome!" about it is a good way to get categorized as a troll and blocked.

I am sending this reply instead of blocking you because I think you might just sincerely be this non-practiced at communicating with other humans, and not malicious.

@bagder I need to write up a pull request that adds a security hole and then a report of that hole...

Now if I could just figure out how to make money doing that.

@bagder sometimes I feel like some reports I triage in my work’s bug bounty program are AI generated… this just leads me to believe that’s not just a feeling 🫣

@bagder do you have a link?

EDIT: now I see it, please don't mind me