New blog post: Demystifying multiple importance sampling

It is a simple thing that turns out to be confusing to a lot of us rendering engineers. I try to explain it in detail here with path-tracing examples!

https://lisyarus.github.io/blog/posts/multiple-importance-sampling.html

@lisyarus Thanks for the writeup! One thing I always struggle with (MIS, ReSTIR, etc.) is how you actually compute the probability p(i). Is it just a matter of looking it up for a given function, or can it be derived from the function itself? For instance, for a uniform distribution, I assume p(i) is 1 because each sample has the same probability of being picked? Or is it 1/interval? Maybe I just need to brush up on probability theory :)
@theWarhelm Usually you explicitly compute it, yep! For a uniform distribution on a set with "size" A, the probability density is 1/A (e.g. 1/(b-a) for an interval [a,b], or 1/(2*pi) for a hemisphere, measured in solid angle). For more complicated distributions like VNDF there's just a known formula for the density.
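(Not part of the original thread: a minimal Python sketch of the idea, where the estimator divides by the explicit density, 1/(b-a) for a uniform distribution on [a,b]. The function names are made up for illustration.)

```python
import math
import random

def monte_carlo(f, sample, pdf, n=100_000):
    """Estimate an integral of f by averaging f(x) / pdf(x) over random samples."""
    total = 0.0
    for _ in range(n):
        x = sample()
        total += f(x) / pdf(x)
    return total / n

# Uniform sampling on [a, b]: the density is the constant 1/(b-a).
a, b = 0.0, math.pi
estimate = monte_carlo(
    math.sin,
    sample=lambda: random.uniform(a, b),
    pdf=lambda x: 1.0 / (b - a),
)
# The true integral of sin over [0, pi] is 2.
```

The same structure works for any sampling strategy: only the `sample` and `pdf` pair changes, and they must describe the same distribution.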
@lisyarus That finally makes sense, thanks!
@theWarhelm I'll add that sometimes the probability density isn't actually known, and in that case you can't use the distribution for Monte-Carlo estimation directly. E.g. in something called "sampling importance resampling" you generate N samples from a distribution P and select one of them with certain weights to approximate some other distribution Q. When N=1, you get the original distribution P. When N -> infinity, you get Q. In between, you get some distribution whose density is pretty much unknown.
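(Again not from the thread: a toy Python sketch of sampling importance resampling as described above. The weights are the ratio q/p, and the resulting sample's density is only known in the limits N=1 and N -> infinity.)

```python
import random

def sir_sample(sample_p, pdf_p, pdf_q, n):
    """Sampling importance resampling: draw n candidates from P,
    then pick one with probability proportional to the weight q/p."""
    candidates = [sample_p() for _ in range(n)]
    weights = [pdf_q(x) / pdf_p(x) for x in candidates]
    # random.choices selects with probability proportional to the weights
    return random.choices(candidates, weights=weights, k=1)[0]

# Toy example: P is uniform on [0, 1], the target Q has density 2x.
draw = lambda: sir_sample(
    sample_p=random.random,
    pdf_p=lambda x: 1.0,
    pdf_q=lambda x: 2.0 * x,
    n=32,
)
```

With n=32 the resampled distribution is already close to Q (its mean approaches E_Q[x] = 2/3), but its exact density has no simple closed form.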