An earnest question: why haven't operating systems (or their filesystems) added fast paths for bulk directory+content creation? That is, for untar/unzip?

As I understand it, the bulk creation of a directory structure and its underlying content involves a lot of ~expensive roundtrips from userspace into the kernel – several syscalls per entry – akin to OpenGL's original immediate mode.
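To make the "immediate mode" analogy concrete, here's a minimal sketch (toy paths, stdlib only) of the per-entry pattern an extractor pays: every directory is its own mkdir(2), and every file is at least an open(2), a write(2), and a close(2).

```python
import os
import tempfile

# A toy "archive": (path, contents) pairs, None meaning a directory.
# These names are made up for illustration.
ENTRIES = [
    ("pkg", None),
    ("pkg/src", None),
    ("pkg/src/main.c", b"int main(void) { return 0; }\n"),
    ("pkg/README", b"hello\n"),
]

def extract(entries, dest):
    for path, data in entries:
        target = os.path.join(dest, path)
        if data is None:
            os.mkdir(target)          # one mkdir(2) per directory
        else:
            with open(target, "wb") as f:
                f.write(data)         # open(2) + write(2) + close(2) per file

root = tempfile.mkdtemp()
extract(ENTRIES, root)
```

Scale that loop up to a node_modules-sized tree and the userspace/kernel crossings dominate, which is the shape of the question: there's no single "materialize this whole tree" call to hand the archive to.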

I suppose no one is trying to render directory structures to disk inside a 60fps frame budget, but...

I've only come up with two possibilities:

One, it might be that language package managers occupy a "sweet spot" here: compared to most archives there's far more structure and discrete content to rehydrate, and software developers unarchive these far more often than most other users do?

Two, gains from moving bulk unarchiving across the kernel boundary might be hard to realize, since the kernel is itself working against a pluggable filesystem interface (the VFS)?

@isntitvacant package managers are a pretty good stress test application, but lots of other applications use similarly data-intensive operations. maybe a better question than "why haven't kernels changed" is "why haven't software development workflows moved to non-fs archives" – and i mean this as a generative line of inquiry, not a rhetorical gotcha. some have! and there are good reasons to want to use the fs as a common interface for pluggable tools. and considerable switching cost, etc.
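One existing instance of the "non-fs archive" workflow is reading content straight out of an archive without ever rehydrating it to disk. A stdlib sketch, using an in-memory zip as a stand-in:

```python
import io
import zipfile

# Build a tiny archive entirely in memory (toy module name, for illustration).
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as zf:
    zf.writestr("pkg/module.py", "VALUE = 42\n")

# Read it back by archive path: no directories created, no per-file syscalls
# against the real filesystem.
with zipfile.ZipFile(buf) as zf:
    source = zf.read("pkg/module.py").decode()
```

Python itself ships a version of this: zipimport lets the interpreter import modules directly from a zip on sys.path, which is roughly the tradeoff named above – the archive becomes the interface, at the cost of every pluggable tool needing to speak it instead of the fs.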

@isntitvacant

“suppose no one is trying to render directory structures to disk inside a 60fps frame budget”

You don’t know my life.