"Exposure of Hard-coded Private Keys and Credentials in #curl Source Repository"

a "critical" issue.

We have this test suite in git...

"This report, including the verification steps and analysis, was prepared using an AI security assistant to ensure comprehensive and reproducible results."

Thanks. Great.

curl disclosed on HackerOne: Exposure of Hard-coded Private Keys...

Multiple private/test RSA keys and example credentials were discovered embedded in the public curl source repository and associated documentation. These sensitive secrets were detected using automated tools (gitleaks) and manual review. Their presence could allow attackers to impersonate trusted curl infrastructure, decrypt traffic, or pivot into build or CI systems if reused, creating a severe...

HackerOne

@bagder

I have not made anything public. I am not exactly sure what you are asking for.
Why does this sound so AI?

@bagder

That’s amazing for all the wrong reasons

@bagder classic! I got pulled into a Big Serious Meeting with the boss 15 years ago about this sort of thing once!

Thing is: it was literally foo/bar and had a name like “testCreds” 🤪

I’ll never forget the Senior Developer who was there to throw me under the bus, too!

(I quit shortly after)

@bagder ooh, they've come back with a comment!

@bagder lmao.

Did not know I was submitting crap.

"I just found this in the trashcan nearby. Didn't know it was garbage. Did you expect me to smell it or something?"

@bagder People should need to pay $100 (donated to charity) for every bug report they want eligible for bug bounties.
@miki @bagder Not the worst idea ever. But then I see a different problem surfacing: Self-entitled people who will not accept a rejection and who will endlessly badger bagder because they paid the submission fee...

@bagder oh, we got one like this too (as a private business with no bounty or anything, even). Extremely critical disclosure of credentials, abuse risk, lengthy "report" with long steps and stuff, with reproduction against a third-party service.

We had the default maptiler key from element-web's default config.json served somewhere (it is in their github repository), along with… the URL of our server, served by that very same URL. Which is public.

A great loss of time indeed.

@bagder This just makes me want to scream.

But do they actually believe that they are submitting something useful?

Also: The act of asking another human "Hey, does this thing make sense?" seems to have been completely forgotten.

@bagder secret scanning is a good idea given the number of checked in keys in the world.

But we still have to explain every couple years to folks that the "compiler can process .pfx files" unit tests aren't security problems 😅

@malwareminigun yes indeed, that's a common mistake in lots of places
@bagder @malwareminigun
What I do to get around this is to base64 encode the keys in source, and then decode them at test time.
The secret scanners look for known strings, and don't base64 decode string literals...
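A minimal sketch of the trick described above, assuming a Python test suite: the test-only key material lives base64-encoded in source, so a scanner matching literal PEM headers won't flag it, and it's decoded only when the test runs. The key text and helper names here are made up for illustration; nothing below is a real secret. (As the replies point out, this only dodges naive scanners rather than addressing the finding.)

```python
import base64

# Test-only key material, base64-encoded so literal-matching secret
# scanners don't see the "-----BEGIN ...-----" header in source.
# This is a fabricated fixture, not a real key.
TEST_KEY_B64 = base64.b64encode(
    b"-----BEGIN TEST KEY-----\nnot-a-real-key\n-----END TEST KEY-----\n"
).decode("ascii")

def load_test_key() -> bytes:
    """Decode the embedded test key at test time."""
    return base64.b64decode(TEST_KEY_B64)
```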

@StompyRobot @bagder I feel like you're looking to me for approval here but that sounds horrifying. It's at best in keeping with the letter rather than the spirit of requirements.

Needing to explain to the folks complaining about the checked in secret is more work but it is still the correct way to do that.

@malwareminigun @bagder
I view it the same as adding a // NOLINT comment. Sometimes the automation is wrong, and rather than causing repeated time wasting, add a comment explaining what's going on and shut up the false alarm.
@StompyRobot I agree with a suppression mechanism and explanations. I don't agree with playing a walls-and-ladders game to try to 'hide'.
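The suppression-with-explanation approach above can be sketched like this, assuming a gitleaks setup (the thread's own scanner): gitleaks documents an inline `gitleaks:allow` annotation that tells it to skip the flagged line, so the fixture stays in plain sight with a written justification instead of being obfuscated. The credential and helper below are invented for illustration.

```python
# A deliberately public test fixture, annotated so the scanner skips it
# with an explanation on the same line. Not a real secret.
TEST_DB_PASSWORD = "correct-horse-test-only"  # gitleaks:allow -- CI fixture, guards nothing

def connect_args() -> dict:
    """Build connection kwargs for the test database (hypothetical helper)."""
    return {"user": "test", "password": TEST_DB_PASSWORD}
```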
@bagder You should take this seriously. Apparently you're not compliant with HIPAA; you don't want to be sued for medical malpractice.
@bagder "Closed as low effort report"

@bagder

"The security impact of this vulnerability is severe and multi-faceted:"

omg who talks like this? 😂

@bagder I'd be very tempted to ignore any AI-generated reports.

@bagder

was prepared using an AI security assistant

Whenever I read something like this I'm like: "Yeah, I'm not doing anything, too."

@bagder ah yes because AI is known for being reproducible. Of course.
@bagder What about directly marking all reports that contain expressions such as "using AI" as invalid? ;-) Given the number of "bugs" you received in the last few weeks…