Cold diffusion: the basic idea of diffusion works without all the specifics.

You can replace the Gaussian noise with any other noise, or even with noiseless operations like masking or overlaying a picture of a random animal.

https://arxiv.org/abs/2208.09392

Cold Diffusion: Inverting Arbitrary Image Transforms Without Noise

Standard diffusion models involve an image transform -- adding Gaussian noise -- and an image restoration operator that inverts this degradation. We observe that the generative behavior of diffusion models is not strongly dependent on the choice of image degradation, and in fact an entire family of generative models can be constructed by varying this choice. Even when using completely deterministic degradations (e.g., blur, masking, and more), the training and test-time update rules that underlie diffusion models can be easily generalized to create generative models. The success of these fully deterministic models calls into question the community's understanding of diffusion models, which relies on noise in either gradient Langevin dynamics or variational inference, and paves the way for generalized diffusion models that invert arbitrary processes. Our code is available at https://github.com/arpitbansal297/Cold-Diffusion-Models
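The generalized test-time update the abstract alludes to can be sketched in a few lines. This is a toy illustration, not the paper's implementation: the deterministic degradation here is a simple linear fade, and an oracle stands in for the trained restoration network, so all names are illustrative. The update is the paper's "improved" sampling rule, x_{s-1} = x_s - D(x0_hat, s) + D(x0_hat, s-1).

```python
import numpy as np

T = 10  # number of degradation steps

def degrade(x0, s, T=T):
    # Toy deterministic degradation D(x0, s): linearly fade toward zero.
    # Stand-in for blur/masking/etc.; any severity-indexed operator works.
    return (1.0 - s / T) * x0

def restore(xs, s, x0_true):
    # Oracle standing in for the trained restoration network R(x_s, s) ≈ x0.
    # In the paper this is learned; here we cheat to show the update rule.
    return x0_true

def cold_sample(xT, x0_true, T=T):
    # "Improved" cold-diffusion sampling:
    #   x_{s-1} = x_s - D(x0_hat, s) + D(x0_hat, s-1)
    x = xT
    for s in range(T, 0, -1):
        x0_hat = restore(x, s, x0_true)
        x = x - degrade(x0_hat, s) + degrade(x0_hat, s - 1)
    return x

x0 = np.random.rand(8, 8)
xT = degrade(x0, T)  # fully degraded input (all zeros under this fade)
rec = cold_sample(xT, x0)
print(np.allclose(rec, x0))  # True: the update walks back up the degradation path
```

With a perfect restorer the loop telescopes exactly back to x0; the interesting empirical result is that it keeps working when the oracle is replaced by a network trained to undo the degradation.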

@pbloem It's a lovely paper! A take I've seen (and enjoy) is that perhaps the name "step-by-step-dedegradation" is a better name than "diffusion models."

I still really enjoy that the usual "uncorrelated Gaussian pixel-wise noise" has connections to so many other areas of maths and physics. Those connections give you a good starting guess for hyperparameters (the noise schedules!), and yet you can forget all of that and it just... still works :)

@pbloem Also, I'd love to hear the experts' takes on this paper. I think it will be discussed soonish in the EleutherAI Diffusion Reading Group (https://github.com/tmabraham/diffusion_reading_group).

@cr I didn't know about that reading group. I'll give it a try.

I think this kind of thing is a bit like finding out you can throw away a lot of code. It'll make your program better and simpler, but it still feels bittersweet.

I expect that in the long run it'll lead to a stronger and (even) more general foundation for diffusion models.