Doubling down on my point from yesterday that we don't care enough about proper OOM management.

I just played around with a giant zram device to see if we can make way more use of compressed memory.

Turns out yes, suddenly my Firefox on Linux holds 250 tabs in memory without any disk-backed swap.
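
For anyone wanting to reproduce this, a setup along these lines should give a "giant" zram swap device. The size and compression algorithm here are my assumptions, not necessarily what was used in the experiment:

```shell
# Load the zram module, then let zramctl grab the first free /dev/zramN
sudo modprobe zram
dev=$(sudo zramctl --find --size 32G --algorithm zstd)

# Format it as swap and enable it at a higher priority than any disk swap
sudo mkswap "$dev"
sudo swapon --priority 100 "$dev"

# Confirm the device is active
swapon --show
```

Note that the 32G here is the *uncompressed* capacity; the RAM actually consumed depends on how well the pages compress.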

Kinda makes the point that having a fixed-size zram device is a bad idea, and the kernel should just compress as much memory as possible?

@verdre There's also zswap, which I think we should try. The downside is that it requires a swapfile or swap partition. The upside is that it's a dynamically sized pool of compressed pages, and it seems to actually be better suited for this task than zram.
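
A sketch of switching an existing swap setup over to zswap, using the runtime parameters from the kernel's zswap documentation (the compressor, zpool, and pool-size choices are my assumptions):

```shell
# Requires a regular swapfile or swap partition to already be active;
# zswap then acts as a compressed write-back cache in front of it.
echo 1        | sudo tee /sys/module/zswap/parameters/enabled
echo zstd     | sudo tee /sys/module/zswap/parameters/compressor
echo zsmalloc | sudo tee /sys/module/zswap/parameters/zpool

# Allow the compressed pool to grow to up to 50% of RAM (default is 20)
echo 50       | sudo tee /sys/module/zswap/parameters/max_pool_percent
```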

@AdrianVovk Yup, the concept behind zswap seems to be much more fitting for us.

Still, in my experiments, zswap causes my Firefox to be killed way earlier than with the giant zram, when it should be much later, since now there's also real disk storage available to spill to.

@AdrianVovk My wild guess as to why that's happening is that the kernel isn't smart enough. I'm testing with a 16 GB swapfile in addition to my 16 GB of RAM.

The kernel now fills up the 16 GB of uncompressed RAM, then starts swapping out, keeping track of each byte it swaps out. As soon as it has swapped out 16 GB (even though zswap didn't write anything back to disk, I checked), memory pressure rises and the OOM killer kills Firefox, when actually there's more memory available.
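
One way to do that writeback check is via zswap's debugfs counters (paths per the kernel's zswap docs; needs root and a mounted debugfs):

```shell
# Dump all zswap statistics
grep . /sys/kernel/debug/zswap/*

# written_back_pages = 0  -> no swapped page actually hit the disk
# stored_pages            -> pages held compressed in the pool
# pool_total_size         -> bytes of RAM the compressed pool occupies
```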

@AdrianVovk The concept of exposing potentially infinite memory (if the compression ratio were infinite) as "fixed size" swapfiles just seems wrong to me.
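
Back-of-envelope sketch of that accounting gap (the 3:1 average compression ratio is my assumption, just to make the numbers concrete):

```python
# With a 16 GiB swapfile, the kernel's swap accounting caps out at
# 16 GiB of swapped pages, no matter how well they compress.
SWAP_GIB = 16
RATIO = 3.0  # assumed average zswap compression ratio

accounted_capacity = SWAP_GIB          # what the swap bookkeeping sees
ram_cost_gib = SWAP_GIB / RATIO        # RAM the compressed pool actually uses

print(f"swap accounting caps out at {accounted_capacity} GiB")
print(f"but those pages occupy only ~{ram_cost_gib:.1f} GiB of RAM")
```

So the pool is "full" by the swapfile's accounting while it has only spent a third of that in RAM; a dynamically sized pool would keep growing instead of tripping the OOM killer.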

On macOS I can keep writing zeros to memory and the kernel just transparently compresses it away.

@verdre Well, macOS also exposes it as swapfiles. When it can no longer compress stuff, it just spills over onto disk.