Regarding https://mastodon.gamedev.place/@gob/112450371375832518, the best system for shader code I've used was at Media Molecule on Dreams:

- Any C++ file could embed a PSSL shader between macro begin/end markers; it got automatically compiled as part of the build process.
- PSSL and C++ shared headers of structs/defines/enums, so they were always in sync.
- Lots of systems supported compiling as both C++ and PSSL (e.g. the whole maths lib, which used clang ext_vector for shader-like syntax/swizzles in C++ too)

...

Hugo Devillers (@[email protected])

All shading languages suck. Moreover, they’re an outdated concept and they’re failing us. https://xol.io/blah/death-to-shading-languages/


- Also *no bindings*: all shader resources were passed using shared structs that directly embedded buffer/texture descriptors or pointers to other structs. Easy to describe and easy to change.
- And it had great hot-reload support for iteration/debugging: press a key in VS to recompile the shaders in that file and instantly replace them in the live game.

...

Anyhow, best system I've used. Super easy to add new shaders, share code between CPU and GPU, and iterate live.

Likely only practical due to having fixed platforms (PS4/Pro) and compilers (clang and psslc), and shaders being code rather than content, but it's the benchmark I judge shader pipelines against. :)

@sjb3d TBH though: if you don't need to support a ton of shaders and a ton of drawcalls (codegen, shader specialization), then life is easy... you can even write some reflection and dynamic binding system, as perf won't kill you. That's how EA's "Ignite" (aka RNA) worked (sports games). I imagine Dreams was mostly a few shaders that hammered the GPU at 100% without having to deal with pesky drawcalls etc. The hard problem is getting great iteration times when you have 100k shaders per "level".