Was thinking again a while ago about what a waste PBR textures can be under most lighting.

Kind of weird to do a 4x texture memory increase - assuming BC1-BC5 and no alpha/metalness, e.g. BC1 base color + BC5 normal map + BC4 roughness - for detail that only shows up under specific lighting conditions and looks flat everywhere else.
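For concreteness, the per-pixel math behind that 4x, as a quick sketch (block-compressed bits per pixel, not tied to any particular engine):

```python
# Bits per pixel for the relevant BCn block-compressed formats:
# BC1 and BC4 store 8 bytes per 4x4 block (4 bpp), BC5 stores 16 (8 bpp).
BPP = {"BC1": 4, "BC4": 4, "BC5": 8}

base_only = BPP["BC1"]                           # base color alone
pbr_set = BPP["BC1"] + BPP["BC5"] + BPP["BC4"]   # + normal + roughness

print(pbr_set / base_only)  # -> 4.0
```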

Though doubling texture resolution in both dimensions is also a 4x increase that might never show up (esp. with upscaling), so all things considered, maybe 4x isn't that bad.

#gamedev

@archo Yeah but at sensible resolutions, the higher-rez textures will never be loaded. So all they're wasting is disk space and their own production time. Whereas PBR is burning my precious DRAM for minor LSB differences. Boooooo.

(I say this with honest love to all my PBR shader writers)

(it's a joke. This is a bit)

(or is it)

@TomF Texture loading time from disk and download time/bandwidth would be wasted as well, which seems relevant with modern 150 GB games. (Also, not many such games fit on a 1 TB console.)

But I suspect that for high-res textures there could be a way to stream the biggest mips on-demand from the CDN to the GPU based on what the GPU needs, which AFAIK nobody's doing yet. PBR seems uniquely disadvantaged in this regard (but at least the number of parameters doesn't seem to be growing infinitely).
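Roughly the loop I'm imagining - everything here (the function, the fetch callback, the mip bookkeeping) is hypothetical, just sketching the idea in Python:

```python
# Hypothetical on-demand mip streaming: the GPU reports which mip level
# it actually wanted to sample; we only fetch finer mips (lower numbers)
# from the CDN once something requests them.
def stream_mips(requested_mip, resident_mips, fetch):
    """Fetch any mips finer than what's resident, down to requested_mip."""
    finest = min(resident_mips) if resident_mips else 99
    for mip in range(finest - 1, requested_mip - 1, -1):
        resident_mips.add(fetch(mip))  # e.g. one CDN GET per mip level
    return resident_mips

resident = {3, 4, 5}  # only coarse mips shipped on disk with the game
fetched = []
stream_mips(1, resident, lambda m: (fetched.append(m), m)[1])
print(sorted(fetched))  # -> [1, 2]
```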

@archo I keep waiting for @rygorous to come up with a compression scheme for PBR textures. Gotta be a lot of zeros in that sparse matrix, right?
@TomF @archo Wronski et al. have already done it

@rygorous @TomF IIRC those methods mainly dealt with correlated color/normal/roughness. Though I've seen a post from 2020 that deals with tile repetition as well.

Many emissive/metalness maps I've seen could also benefit from tile deduplication in VRAM (lots of solid black/white space).
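A toy illustration of how much a mostly-solid map collapses under tile hashing (made-up helper, 4x4 tiles for brevity - a real scheme would dedupe compressed blocks in VRAM):

```python
# Split a texture (2D list of values) into 4x4 tiles and count how many
# are actually distinct. Mostly-solid emissive/metalness maps collapse
# to a handful of unique tiles.
def unique_tile_ratio(tex, tile=4):
    h, w = len(tex), len(tex[0])
    seen, total = set(), 0
    for y in range(0, h, tile):
        for x in range(0, w, tile):
            block = tuple(
                tuple(tex[y + j][x + i] for i in range(tile))
                for j in range(tile)
            )
            seen.add(block)
            total += 1
    return len(seen) / total

# A 16x16 all-black metalness map dedupes to a single tile out of 16:
black = [[0] * 16 for _ in range(16)]
print(unique_tile_ratio(black))  # -> 0.0625
```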

Geometry data (normal, AO, curvature) seems extremely difficult to compress though, despite looking very regular visually.

@archo @rygorous @TomF a lot of your typical AAA content forms the final surface parameters “on-the-fly” by combining lots of tilers together in the shader. In some cases, though, they will indeed ship “flattened” versions that were produced in Substance or a proprietary tool; it depends on the content. But historically shipping tiling maps has been an effective “compression” scheme, at the cost of performance and shader complexity. Or you even have VT systems that “flatten” at runtime.

@mjp @archo @rygorous Absolutely. But you can do that flatten/baking step based on predictive heuristics as well, and they work just fine.

Now if you're arguing that you could use sampler feedback for the SOURCES of that baking - since you don't really care about the latency there - absolutely! Although I suspect the heuristics work Just Fine Enough even there.