Artis3n

@artis3n@infosec.exchange

Just heard AOC say:

"If there are no more good people in the world, then I want to be the last one."

I think I'll go to sleep for the night on that one.

New blog/whitepaper release:

Shostack + Associates is pleased to release our latest whitepaper, Understanding the Four Question Framework for Threat Modeling! It’s free as part of our Black Friday sale, and uhhh, because we like sharing knowledge it’ll remain free.

I wrote this paper because someone once called the questions “surprisingly nuanced,” which I thought was kind, and because I saw even collaborators varying the words. And as I write in the introduction:

People commonly make the mistake of rephrasing the questions. They don’t realize that there are reasons to use the specific framework questions. There’s nuance and intent in the questions, which are meant to be answerable in many ways. Rephrasings often lose nuance, flexibility, or both. Further, consistency in how we say things contributes to consistency in how we do them.

If this isn’t more fun than listening to your Uncle Jack expound on football on Thanksgiving, double your money back!

https://shostack.org/whitepapers/?utm_source=mastodon&utm_medium=posts&utm_campaign=four-question-whitepaper&utm_id=4qframe

Threat Modeling Whitepapers from Shostack + Associates

"On behalf of the WordPress security team, ..." and then many mentions of "fixing a security issue" without specifying what it is. (The patch is, presumably, public since the plugin is OSS and PHP?)

https://wordpress.org/news/2024/10/secure-custom-fields/

I don't have an opinion on the broader WordPress situation, but seeing a security exception used to wield power in a broader controversy is extremely worrying.

Open source communities trust security teams with exceptional powers, and weakening that trust damages everyone.

Secure Custom Fields – WordPress News

Cars are increasingly surveillance systems on wheels. They spy relentlessly not just on the driver (why are people comfortable with this, or do they not know it's happening?) but also on the surroundings. Tesla is the worst offender, as it keeps trying, unsuccessfully, to do self-driving.

If you own one of these, you are helping make the surveillance state much more pervasive.

Congress and state lawmakers obviously are in favor of all this spying, because they do nothing to stop it.

Microsoft will try the data-scraping Windows Recall feature again in October

Initial Recall preview was lambasted for obvious privacy and security failures.

https://arstechnica.com/gadgets/2024/08/microsoft-will-try-the-data-scraping-windows-recall-feature-again-in-october/?utm_brand=arstechnica&utm_social-type=owned&utm_source=mastodon&utm_medium=social

Expecting data brokers to care about securing the data they so casually collect, buy, collate and otherwise acquire is pointless. None of them really do, and almost every breach involving a data broker shows this. Their business model rests on the idea that the records they gather are already public, and that this entitles them to collect, resell, and otherwise trade in that data. If that is the fundamental organizing idea of your business model, how much are you going to care about protecting it from mass theft?
Our driving habits are sold by car companies to data brokers for pennies, ranging from a mere 26 cents per car up to 61 cents per car. https://www.eff.org/deeplinks/2024/07/senators-expose-car-companies-terrible-data-privacy-practices
Senators Expose Car Companies’ Terrible Data Privacy Practices

In a letter to the Federal Trade Commission (FTC) last week, Senators Ron Wyden and Edward Markey urged the FTC to investigate several car companies caught selling and sharing customer information without clear consent. Alongside details previously gathered from reporting by The New York Times, the...

Electronic Frontier Foundation
"And then we pushed an update that triggered the consequences of our prior fuckup in failing to bounds check, failing to lint configurations, failing to understand that a config file could be corrupted or wrong and providing an error handling mechanism, and failing to actually test our shit"

A new report finds Boeing’s rockets are built with an unqualified work force

NASA declines to penalize Boeing for the deficiencies.

https://arstechnica.com/space/2024/08/a-new-report-finds-boeings-rockets-are-built-with-an-unqualified-work-force/?utm_brand=arstechnica&utm_social-type=owned&utm_source=mastodon&utm_medium=social

yeah you know there are languages where this problem just doesn't happen?

The phrase "input pointer array" appears in the next para, which means "we are doing silly shit with C++ because we're leet yo"

Languages that don't make you do your own fucking pointer math exist for a fucking reason.

Their 'mitigation' here is to bother to check that they're still in allocated memory, something which is only a problem by their choice.
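
To make that concrete, here's a minimal sketch of the difference; the types and names are mine, not theirs, since the actual sensor code isn't public:

```cpp
#include <cstddef>
#include <cstdint>
#include <optional>
#include <vector>

// Hypothetical: one channel entry parsed out of the update blob.
struct ChannelEntry {
    uint32_t id;
    std::vector<uint8_t> payload;
};

// Raw-pointer style: the caller owns the arithmetic, and nothing stops index
// from walking off the end of the allocation.
const uint8_t* entry_at_unchecked(const uint8_t* base, size_t stride, size_t index) {
    return base + index * stride;
}

// Container style: the bounds question is answered by the container, not by us,
// and an out-of-range index is an ordinary, recoverable condition.
std::optional<ChannelEntry> entry_at(const std::vector<ChannelEntry>& entries, size_t index) {
    if (index >= entries.size()) {
        return std::nullopt;
    }
    return entries[index];
}
```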

Oh boy, -test coverage-

So they talk about how their test cases weren't broad enough in the next para, and they promise swearsie-realsie that they'll put in test scenarios that "better reflect production usage"

Buuuut I don't see one -really fucking obvious standout test case- that, given the context above, really the fuck ought to be separated out:

They say nothing about whether they're gonna test the -failure- of the sensor.

If you ain't testing with invalid inputs and other abuses to bound the behavior of your binary, then you're not testing its full envelope of behavior and you cannot assert anything meaningful about its suitability for production.

Car manufacturers do crash tests to make sure you don't fucking impale your face on the steering column; this is the exact same fucking principle.
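
Something like the following is the missing kind of test. The parser here is a toy stand-in and the file format is invented for illustration; the point is that the interesting cases feed it abuse and expect a refusal, not a crash:

```cpp
#include <cassert>
#include <cstddef>
#include <cstdint>
#include <optional>
#include <vector>

// Toy stand-in for the real parser: expects a 4-byte magic and a count that
// matches the remaining payload length. Names and format are made up.
struct ParsedChannelFile { uint32_t entry_count; };

std::optional<ParsedChannelFile> parse_channel_file(const std::vector<uint8_t>& b) {
    if (b.size() < 8 || b[0] != 'C' || b[1] != 'H' || b[2] != 'N' || b[3] != 'L')
        return std::nullopt;                        // truncated or wrong magic
    uint32_t count = b[4] | b[5] << 8 | b[6] << 16 | uint32_t(b[7]) << 24;
    if (b.size() - 8 != size_t(count) * 16)         // payload must match the declared count
        return std::nullopt;
    return ParsedChannelFile{count};
}

int main() {
    // The happy path is the easy part...
    std::vector<uint8_t> good = {'C', 'H', 'N', 'L', 0, 0, 0, 0};
    assert(parse_channel_file(good).has_value());

    // ...the tests that matter are the abusive ones.
    assert(!parse_channel_file({}).has_value());                           // empty file
    assert(!parse_channel_file({'C', 'H', 'N'}).has_value());              // truncated header
    assert(!parse_channel_file(std::vector<uint8_t>(64, 0)).has_value());  // all zero bytes
    std::vector<uint8_t> lying = {'C', 'H', 'N', 'L', 0xFF, 0xFF, 0xFF, 0xFF};
    assert(!parse_channel_file(lying).has_value());                        // count claims ~4 billion entries
    return 0;
}
```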

There's a -lot- of fascinating subtlety and discussion to be had around testing generally,

but this is kindergarten level horseshit. Maybe when they stop eating the crayons we can talk about the more interesting bits.

"a"?

So there's a logic error here alright, but it sure the fuck ain't with their agent's parsing. This is repeating items 1 and 2, but from a different level of abstraction.

This is turd-polishing.

More to the point:

Why the everliving fuck are you hard-coding a specific number of channels into your fucking agent,

when 'channels' are a tagging convention and have no pertinence to the detection logic,

and you could just -fucking allocate the resources to hold the content based on the configuration itself-

You -utter- -assholes-

You are -creating a problem for yourself- and then -doubling down on doing it wrong-

Anyway, seeing as this "finding" is a dupe of 1 and 2 combined, the 'mitigations' are the same horseshit; this is clearly here to pad out the numbers and has no actual merit.
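
A rough sketch of what "allocate based on the configuration itself" could look like; the names and field layout are made up, since the real template format isn't public:

```cpp
#include <cstddef>
#include <cstdint>
#include <optional>
#include <vector>

// Hypothetical parsed template field; illustrative only.
struct TemplateField { std::vector<uint8_t> matcher; };

// Bad: the agent "knows" there are exactly 20 fields, independent of what the file says.
constexpr size_t kHardcodedFieldCount = 20;

// Better: the content itself says how many fields it carries, and storage is sized from that.
std::optional<std::vector<TemplateField>> load_fields(
        size_t declared_count,
        const std::vector<std::vector<uint8_t>>& raw) {
    if (raw.size() != declared_count) {
        return std::nullopt;            // content disagrees with its own declared count: reject it
    }
    std::vector<TemplateField> fields;
    fields.reserve(declared_count);     // allocation driven by the data, not a baked-in constant
    for (const auto& bytes : raw) {
        fields.push_back(TemplateField{bytes});
    }
    return fields;
}
```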
I wonder if they had an "ai" write it and then made an intern take out the Nigerianisms.
this is a dupe of 3.

.......

Mister Holmes, sir, we have a -mystery- on our hands!

Why, just this morning the lad Simpkins came into Scotland Yard with the most astonishing tale and -

Mister Holmes, the mudlarks are in an absolute uproar, you must have heard from the Irregulars -

London's entire sewer system has been -scoured utterly bare-

There is -no shit-, Sherlock!

yeah they don't even try to dress this one up

Problem is, they completely fail to talk at all about staged deployment for -any other part of the product- so uh.

Also, as one of their mitigations, they're deigning to allow customers to choose whether to accept the new content.

You know, the -base expectation- from -literally everyone else-

But only about this 'channel' content. Not anything with the actual definitions or the agent binary itself; none of that is mentioned at all.

Completely the fuck missing.

So, see, what this -looks- like they're saying is that they've got third parties in to review the code and process.

But those are two separate clauses.

They have two third parties in to review the -sensor code-

-and-

They are conducting a review of process.

But they are not actually -saying- that the third parties are involved in the process review at all - only the code review.

Perhaps someone ought to ask them to clear that the fuck up.

It's that sticky "we" there, y'see?

"We" -could- be implied to mean the set of crowdstrike, vendor 1, and vendor 2.

But "we" can also refer to crowdstrike the company, or to the personnel of that company.

"We" is one of those words that has -very- tricky scope to it, and can be used to lie to you right to your face.

This whole technical details section is exec-pandering crap.

-this- little fucker is funny tho, 'cuz it implies that if you have an input that cannot be parsed with regular expressions, clownstrike can't handle it.

The next part appears to be an extract from some guy at MS's blog about this shit -

https://www.microsoft.com/en-us/security/blog/2024/07/27/windows-security-best-practices-for-integrating-and-managing-security-tools/

whiiiich pads out the last half of the document, and since it isn't clownstrike's work, just shit they lifted from someone else's blog, it doesn't matter

Windows Security best practices for integrating and managing security tools | Microsoft Security Blog

We examine the recent CrowdStrike outage and provide a technical overview of the root cause.

Microsoft Security Blog

So yeah, only the first six pages have any content on them; two of the findings are duplicates and are just there to pad for length; they -missed- a bunch of other findings; they are committed to a known-broken sensor operations regime and have no clear plans to fix the underlying architectural issues exposed -by- this; and they don't have anyone left in the place who can fucking write worth a damn.

Complete fucking clownshoes. If I were their customer I would be calling for their literal, heart-ripped-from-chest, blood for this.

Also:

Who the everliving fuck -audited- this pile of shit?

Who signed off that -this- was suitable for deployment to federal computers?

Who the fuck did their audit and why the fuck did they not catch -any- of this?

That "compile time" thing earlier is still bugging the shit out of me, especially because the para following doesn't talk about compilation at all.
@munin I have audited things knowing full well that they’re just gonna accept-risk as-designed bla-bla-bla all of it and then tell people they had it audited
@munin I mean, just to be clear, C++ hasn't made you do your own pointer math for like 13 years, or more if one is competent. My point here is they'd find ways to fuck this up in any language, because they fired hundreds to replace them with AI; it's a fundamental process of incompetence.

@masonbially @munin They've added a lot of new abstractions on top of pointer math, sure. But they haven't meaningfully reduced the risk.

Some examples: std::span::operator[] does UB instead of bounds checking. Same for std::string_view. And both the iterator-based std::copy as well as std::ranges::copy do UB if the output buffer is too small. This new stuff may look nicer but it's exactly as dangerous as pointer math.
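
A small illustration of that point, with toy data (nothing here is from the actual sensor):

```cpp
#include <cstddef>
#include <span>
#include <stdexcept>
#include <vector>

int main() {
    std::vector<int> v = {1, 2, 3};
    std::span<const int> s(v);

    // Checked access: an out-of-range index is a thrown exception you can catch and handle.
    try {
        int x = v.at(10);
        (void)x;
    } catch (const std::out_of_range&) {
        // recoverable: log it, skip the record, keep running
    }

    // "Modern" access: an out-of-range index is undefined behavior, exactly like raw pointer math.
    // (std::span doesn't even get a checked at() until C++26.)
    // int y = s[10];   // UB: do not do this

    // If you insist on span, you get to write the bound check yourself:
    size_t i = 10;
    int z = (i < s.size()) ? s[i] : 0;
    (void)z;
    return 0;
}
```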

@muvlon @masonbially

hey so the pointer math is not the actual issue here; the actual issue is that they made an architectural choice to make the execution of their binary dependent on a fixed integer value hardcoded into the binary, instead of loading options in a way that did not introduce the possibility -of- desynching.

It's a language-independent fundamental architectural situation, showing that they are not coding this to professional standards.

@munin @masonbially There's many layers of fuckup here, as you've detailed very well. But I do think one way they could've fucked up less was using a bounds-checked access and recovering from the error as opposed to yolo-ing it with C++ and getting an unrecoverable BSOD.
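
For what that could look like, a minimal user-space sketch with invented names and a made-up format: every read is range-checked, and a bad update is something to reject while keeping the prior known-good content, not something to crash on.

```cpp
#include <cstddef>
#include <cstdint>
#include <iostream>
#include <optional>
#include <vector>

// Hypothetical content-update handler; names are illustrative, not CrowdStrike's.
struct ChannelUpdate { std::vector<uint8_t> bytes; };

std::optional<std::vector<uint32_t>> parse_update(const ChannelUpdate& u) {
    std::vector<uint32_t> out;
    // Only read complete 4-byte records; a short or corrupt file never causes a wild read.
    for (size_t off = 0; off + 4 <= u.bytes.size(); off += 4) {
        out.push_back(u.bytes[off] | u.bytes[off + 1] << 8 |
                      u.bytes[off + 2] << 16 | uint32_t(u.bytes[off + 3]) << 24);
    }
    if (u.bytes.size() % 4 != 0) {
        return std::nullopt;   // trailing garbage: reject the whole update
    }
    return out;
}

void apply_update(const ChannelUpdate& u) {
    if (auto parsed = parse_update(u)) {
        // ...install the new content...
    } else {
        // The safe failure mode: keep running with the previous known-good content.
        std::cerr << "channel update rejected; keeping prior content\n";
    }
}

int main() {
    apply_update(ChannelUpdate{{0x01, 0x00, 0x00, 0x00}});   // well-formed: applied
    apply_update(ChannelUpdate{{0x01, 0x00, 0x00}});          // truncated: rejected, no crash
    return 0;
}
```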