The GPU Zen 3 book is out, perfect timing for the Holiday Season! https://www.amazon.com/GPU-Zen-Advanced-Rendering-Techniques/dp/B0DNXNM14K
So many fantastic-looking articles!

My teammates and I have also contributed a chapter: "Differentiable Graphics with Slang.D for Appearance-Based Optimization".

Data-driven techniques like numerical optimization, stochastic gradient descent, and differentiable programming are taking over computer science.
Real-time graphics has been relatively slow to adopt them, partly because of tooling. HLSL is not PyTorch, and you don't want to rewrite all your BRDFs in Python.
Fortunately, you don't have to! With Slang, you can differentiate your existing shader code. And it's getting a lot of attention, including recent Khronos adoption: https://www.khronos.org/news/press/khronos-group-launches-slang-initiative-hosting-open-source-compiler-contributed-by-nvidia
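For a taste of what that looks like, here is a minimal sketch (the schlickFresnel function and its inputs are just an illustration, not code from the chapter): you mark an existing function as differentiable and ask Slang for its derivative:

```
// Mark an existing shader function as differentiable; Slang generates
// the derivative code for you.
[Differentiable]
float schlickFresnel(float f0, float cosTheta)
{
    float m = 1.0 - cosTheta;
    float m2 = m * m;
    return f0 + (1.0 - f0) * m2 * m2 * m; // (1 - cosTheta)^5 term
}

void example()
{
    // Forward-mode autodiff: seed d(f0) = 1 to get d(result)/d(f0).
    DifferentialPair<float> f0 = diffPair(0.04, 1.0);
    DifferentialPair<float> ct = diffPair(0.7, 0.0);
    DifferentialPair<float> res = fwd_diff(schlickFresnel)(f0, ct);
    // res.p is the ordinary shader value, res.d the derivative w.r.t. f0.
}
```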
The ability to compute a gradient of a shader function is not enough to make differentiable programming easy. SGD works great in over-parametrized settings such as neural networks, but it requires a lot of know-how to avoid local minima and exploding gradients and to deal with non-convex problems.
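To make that concrete, here is a minimal sketch of a single SGD step in Slang, with naive gradient clipping as a simple guard against exploding gradients (the toy loss and all names are illustrative, not from our chapter):

```
// A toy differentiable loss; `target` is excluded from differentiation.
[Differentiable]
float loss(float param, no_diff float target)
{
    float residual = param - target;
    return residual * residual;
}

// One SGD step; clamping the gradient is a crude but common guard
// against the exploding-gradient problem mentioned above.
void sgdStep(inout float param, float target, float learningRate, float clipValue)
{
    var dp = diffPair(param, 0.0);
    bwd_diff(loss)(dp, target, 1.0); // seed d(loss) = 1; gradient lands in dp.d
    float grad = clamp(dp.d, -clipValue, clipValue);
    param -= learningRate * grad;
}
```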
This motivated our article, where we explain SGD and optimization from the ground up, show some possible pitfalls and how to deal with them in practice, and demonstrate how Slang can help make your existing shader code work in data-driven pipelines.
To make it relatable and practical, we show three non-neural-network applications.
One is automatically computing Jacobians of the variable transformations common in Monte Carlo integration (see the sketch after this list).
Another is a surprisingly fast BC texture compressor that uses SGD.
Finally, the last application replaces analytical material texture mipmap generation (such as LEAN or Toksvig) with a data-driven approach.
The data-driven approach can work for any BRDF, does not require lossy approximations, and can model spatial effects and relationships. Approaches like Toksvig specular AA only modify roughness. By comparison, we automatically compensate for the loss of sharpness and modify diffuse and normal maps!
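For the Jacobian application, here is a minimal sketch of the idea using forward-mode autodiff (the polar-to-Cartesian transform is a toy stand-in of mine, not the chapter's code); each forward pass with a different seed produces one column of the Jacobian:

```
// A 2D change of variables; autodiff gives us its Jacobian columns.
[Differentiable]
float2 polarToCartesian(float r, float theta)
{
    return float2(r * cos(theta), r * sin(theta));
}

float jacobianDeterminant(float r, float theta)
{
    // Seed d(r) = 1 for the first Jacobian column, d(theta) = 1 for the second.
    DifferentialPair<float2> dR =
        fwd_diff(polarToCartesian)(diffPair(r, 1.0), diffPair(theta, 0.0));
    DifferentialPair<float2> dTheta =
        fwd_diff(polarToCartesian)(diffPair(r, 0.0), diffPair(theta, 1.0));
    // For this transform, the determinant comes out to r analytically.
    return dR.d.x * dTheta.d.y - dR.d.y * dTheta.d.x;
}
```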

We provide the source code (in various languages - with Slang, you can stay in shaders, use your engine's C++ code, or tap into the Python ecosystem)!

We hope you find our article helpful and that it inspires you to explore the data-driven future of computer graphics. :)
So go and get the book! https://www.amazon.com/GPU-Zen-Advanced-Rendering-Techniques/dp/B0DNXNM14K

@BartWronski why does the slang optimized one in the middle look noisier than the reference image? There's all these hot spots on the brick that aren't present in the reference or the naive version.
@aeva it was the closest match (under the mean squared error metric) to the reference given the lower resolution of the mipmap. If this result is undesirable, one can pick a different error metric or add constraints (for example, "it should never produce stronger image gradients than the reference", to prevent noisiness).
The cool thing is that those constraints can be changed dynamically and, for instance, exposed as "sliders" for artistic control and for defining desirable properties, which can vary from asset to asset.
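A hedged sketch of what such a constraint could look like in Slang (the names, like gradientPenaltyWeight, and the exact penalty are my illustration, not the chapter's code):

```
// An adjustable loss combining MSE with an image-gradient penalty.
// `gradientPenaltyWeight` is a hypothetical artist-facing "slider".
[Differentiable]
float lossWithConstraint(float3 predicted, float3 reference,
                         float3 predictedDx, float3 referenceDx,
                         float gradientPenaltyWeight)
{
    float3 diff = predicted - reference;
    float mse = dot(diff, diff);
    // Penalize image gradients stronger than the reference's,
    // discouraging the noisy "hot spots" discussed above.
    float excess = max(0.0, length(predictedDx) - length(referenceDx));
    return mse + gradientPenaltyWeight * excess * excess;
}
```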
@aeva Also note - this is clearly not a production-ready technique! :)
But a "minimal" demo and an example of the data-driven approach and how, unlike typical mipmap generation + specular AA, it "automatically" can do crazy things like sharpening. When I worked on Witcher 2 or Far Cry 4, artists often manually sharpened the mipmaps, and my programmer colleagues considered it "an ugly hack". I find it fascinating they intuitively compensated for the sharpness loss, and it has a lower "error".

@BartWronski It's a very compelling demo. I'm definitely interested to learn more about at least the basics of what is going on here. I don't understand what's going on with the differentiable programming stuff at all.

So am I following this right, this isn't just an ai thing then?

@BartWronski The materials I've read or at least tried to read about slang so far have just left me confused on what it has to offer that I can't just do with a normal shader language. I've been trying to find a brief explanation that relies neither on an interest in neural nets nor on a solid understanding of calculus to follow, but I haven't found anything yet.
@BartWronski or like, an example of how it transforms the code that isn't just like oh it simply gives you the wozzlebozzle of your function. You know, like what the actual code transforms are in terms of resulting structure and intrinsics etc.
@aeva Differentiable Slang is fairly recent.
I think its original selling point was cross-compilation to numerous other shading languages (or CUDA, or even C++ :) for debugging), support for most platforms (except consoles, for obvious reasons - no public SDKs for PSSL and similar), and language features and modularity closer to Rust (interfaces and generics) than to C++.
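To give a flavor of those interfaces and generics (the BRDF below is my toy illustration, not code from the article) - a generic function gets statically specialized per concrete type, much like Rust traits:

```
// An interface describes what any BRDF must provide.
interface IBRDF
{
    float3 eval(float3 wi, float3 wo, float3 n);
}

// A concrete type declares its conformance explicitly.
struct PhongBRDF : IBRDF
{
    float3 albedo;
    float shininess;
    float3 eval(float3 wi, float3 wo, float3 n)
    {
        float3 r = reflect(-wi, n);
        float spec = pow(max(dot(r, wo), 0.0), shininess);
        return albedo * max(dot(n, wi), 0.0) + spec;
    }
}

// A generic function, statically specialized per BRDF type.
float3 shade<B : IBRDF>(B brdf, float3 wi, float3 wo, float3 n)
{
    return brdf.eval(wi, wo, n);
}
```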
@aeva no, there are zero neural networks involved, zero training datasets, scraping, none of that stuff. :)
It's mathematical numerical optimization - you define a parameter set (like the mipmap textures of a material) and a "loss function" ("I want those texels, after going through my renderer, to be as close to mip 0 as possible under some error metric"), and the optimizer finds the parameters that best fit the target loss function.
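As a hedged, toy sketch of that loop (the render stand-in, the step size, and the iteration count are all mine, purely for illustration):

```
// Stand-in for "going through my renderer"; purely illustrative.
[Differentiable]
float render(float texel)
{
    return texel * texel;
}

[Differentiable]
float mseLoss(float texel, no_diff float reference)
{
    float diff = render(texel) - reference;
    return diff * diff;
}

// Plain gradient descent: find the texel whose rendered result
// best matches the reference. Step size is illustrative, untuned.
float optimizeTexel(float initialTexel, float reference)
{
    float texel = initialTexel;
    for (int i = 0; i < 200; i++)
    {
        var dp = diffPair(texel, 0.0);
        bwd_diff(mseLoss)(dp, reference, 1.0);
        texel -= 0.1 * dp.d;
    }
    return texel;
}
```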
@aeva I think the big tech push for crappy gen AI slop has turned a lot of people away from a really cool intermediate space, where you can use the same techniques used to train neural networks to fit some parameters to some data. There are opportunities for compression, procedural content generation ("find parameters of my procedural generator that get close to this reference image, and I will edit them later"), simplification, and automation with the user in the loop.
@BartWronski big tech is definitely working around the clock to poison the well and salt the earth as fast as humanly possible, but academics pushing crappy ai slop before the bubble was what soured me to it first. You know between the neural nets being used for really ghastly stuff like predictive policing, redlining, surveillance; and the comparatively minor annoyances like ruining siggraph and making researching stuff like SDFs a slog to filter out all the ai chaff.
@BartWronski somehow despite all that and the drag net scraping training sets and the gray goo apocalypse and the job postings looking for underpaid programmers to audit ai generated code and every time my shit head brother in law makes fun of me and my sister for wanting to make games instead of waiting for the ai researchers to put an end to my career and every crappy real time style transfer demo, I still would rather not throw the baby out with the bath water & at least learn how this works
@aeva I am cynical-optimistic about this crap - is there even a market for it? Who is the target user? What is the use case? :)
People played with DALL-E or Midjourney, but other than using it for cheap-looking YouTube thumbnails or blog post images (they would not pay an artist for those anyway), I don't see it taking off. I know Netflix et al. would like to cut costs and fire as many people as possible, but there is already an abundance of their "content" - it's already cheap, and there is too much of it.
@aeva There will always be a demand for art from real artists; we want a unique vision, not something generic. I don't have enough time to listen to all the interesting music out there, so I pick based on the artist, their personality, and most importantly the culture and community it belongs to. "Generate me a movie with a button click" has zero community, so why would I watch it? I won't be able to discuss it with my friends, so what's the point? VCs will soon realize their products have no application or revenue streams.
@BartWronski @aeva agreed Bart. It's hard to find people interested in that from the ML side too, because it's so different from all the other stuff being worked on.