Versatile Diffusion: a diffusion model trained jointly with image and text reconstruction objectives. It can do text-to-image, image-to-image, image->text->image, and so on.

A 🧶

Paper: https://arxiv.org/abs/2211.08332

Day 13 #30daysofDiffusion #MachineLearning

Versatile Diffusion: Text, Images and Variations All in One Diffusion Model

Recent advances in diffusion models have set an impressive milestone in many generation tasks, and trending works such as DALL-E2, Imagen, and Stable Diffusion have attracted great interest. Despite the rapid landscape changes, recent new approaches focus on extensions and performance rather than capacity, thus requiring separate models for separate tasks. In this work, we expand the existing single-flow diffusion pipeline into a multi-task multimodal network, dubbed Versatile Diffusion (VD), that handles multiple flows of text-to-image, image-to-text, and variations in one unified model. The pipeline design of VD instantiates a unified multi-flow diffusion framework, consisting of sharable and swappable layer modules that enable the crossmodal generality beyond images and text. Through extensive experiments, we demonstrate that VD successfully achieves the following: a) VD outperforms the baseline approaches and handles all its base tasks with competitive quality; b) VD enables novel extensions such as disentanglement of style and semantics, dual- and multi-context blending, etc.; c) The success of our multi-flow multimodal framework over images and text may inspire further diffusion-based universal AI research. Our code and models are open-sourced at https://github.com/SHI-Labs/Versatile-Diffusion.

Very engineering-heavy paper with good qualitative results. I might have missed this, but I did not see any quantitative results for the model.
The following image essentially shows the flow of the model. They use the same VAE used in the LDM paper for image encoding-decoding, and the Optimus model (BERT for encoding, GPT-2 for decoding) for text encoding-decoding. They also use the CLIP text and image encoders for context.
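To make that flow concrete, here is a minimal toy sketch of the routing: a latent codec per output modality (playing the roles of the VAE and Optimus) and a context encoder per input modality (playing the role of CLIP). All shapes, names, and the identity "denoiser" are illustrative stand-ins, not the real VD components.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions only -- not the real VD shapes.
IMG_DIM, TXT_DIM, LATENT_DIM, CTX_DIM = 64, 32, 16, 12

class LinearCodec:
    """Stand-in for a VAE (images) or Optimus (text): encode to a
    latent space and decode back. Real VD uses pretrained networks."""
    def __init__(self, data_dim, latent_dim):
        self.enc = rng.standard_normal((latent_dim, data_dim)) * 0.1
        self.dec = np.linalg.pinv(self.enc)  # toy "decoder"

    def encode(self, x):
        return self.enc @ x

    def decode(self, z):
        return self.dec @ z

image_codec = LinearCodec(IMG_DIM, LATENT_DIM)  # plays the role of the LDM VAE
text_codec = LinearCodec(TXT_DIM, LATENT_DIM)   # plays the role of Optimus
clip_image = rng.standard_normal((CTX_DIM, IMG_DIM)) * 0.1  # CLIP image encoder stub
clip_text = rng.standard_normal((CTX_DIM, TXT_DIM)) * 0.1   # CLIP text encoder stub

def flow(source, target, x):
    """Route one generation flow: encode context from the source modality,
    'denoise' (identity here) in the target modality's latent space, decode."""
    context = (clip_text if source == "text" else clip_image) @ x
    codec = image_codec if target == "image" else text_codec
    z = rng.standard_normal(LATENT_DIM)  # start from noise
    z = z + 0.0 * context.sum()          # a real diffusion model would use context here
    return codec.decode(z)

out = flow("text", "image", rng.standard_normal(TXT_DIM))
print(out.shape)  # (64,) -- an "image" sample
```

Chaining flow("image", "text", ...) into flow("text", "image", ...) gives the image->text->image round trip mentioned above.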
They propose an "FCResBlock" for the text stream of the architecture. They did not discuss why they proposed this, though. Maybe it's inspired by the language-model literature? Reply if you have any intuition. 😅
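Since the paper doesn't spell out the motivation, here is my guess at what a fully-connected residual block looks like: the standard ResNet pattern with dense layers instead of convolutions. The layer sizes and the GELU choice are assumptions, not the paper's exact design.

```python
import numpy as np

def gelu(x):
    # tanh approximation of GELU
    return 0.5 * x * (1 + np.tanh(np.sqrt(2 / np.pi) * (x + 0.044715 * x**3)))

class FCResBlock:
    """Guess at a fully-connected residual block: two dense layers with a
    nonlinearity and a skip connection (ResNet pattern, minus the convs)."""
    def __init__(self, dim, hidden, rng):
        self.w1 = rng.standard_normal((hidden, dim)) * (1 / np.sqrt(dim))
        self.w2 = rng.standard_normal((dim, hidden)) * (1 / np.sqrt(hidden))

    def __call__(self, x):
        return x + self.w2 @ gelu(self.w1 @ x)  # residual/skip connection

rng = np.random.default_rng(0)
block = FCResBlock(dim=8, hidden=32, rng=rng)
x = rng.standard_normal(8)
print(block(x).shape)  # (8,): shape-preserving, so blocks can be stacked
```

Being shape-preserving is what lets such blocks stack into a deep text stream, analogous to how conv ResBlocks stack in the image UNet.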
They trained on LAION-2B, with some cleaning. Three variants are trained: VD-basic is a single-flow image-variation model; VD-DC is a two-flow model that supports text-to-image and image-to-image; VD-official is a four-flow model covering T2I, I2T, T2T, and I2I.
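The abstract's "sharable and swappable layer modules" can be sketched as assembling each flow's network from a shared pool: one data stream per output modality, one context stream per input modality. The module names and the toy residual layers below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
D = 16  # toy latent width

def layer(seed):
    w = np.random.default_rng(seed).standard_normal((D, D)) * 0.1
    return lambda z: z + w @ z  # toy residual layer

# One data stream per output modality, one context stream per input modality.
data_layers = {"image": layer(1), "text": layer(2)}
context_layers = {"image": layer(3), "text": layer(4)}

FLOWS = {  # the four flows of VD-official: (context modality, data modality)
    "t2i": ("text", "image"), "i2t": ("image", "text"),
    "t2t": ("text", "text"), "i2i": ("image", "image"),
}

def denoise_step(flow_name, z):
    """Assemble one network pass by swapping in the modules for this flow.
    All flows draw from the same module pool, so weights are shared across tasks."""
    ctx_mod, data_mod = FLOWS[flow_name]
    z = context_layers[ctx_mod](z)  # conditioning pathway
    z = data_layers[data_mod](z)    # generation pathway
    return z

z = rng.standard_normal(D)
for name in FLOWS:
    assert denoise_step(name, z).shape == (D,)
print("all four flows share one module pool")
```

VD-basic would keep only the i2i route, VD-DC the t2i and i2i routes, and VD-official all four.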
The image-flow weights are initialized from the SD v1.4 checkpoint. I did not find info on how the other parameters are initialized, though.
Results are slightly better than the corresponding baselines, I guess. See the images for comparisons on the various subtasks.
The style and content disentanglement results are interesting. They do this via PCA on the CLIP image embedding of the guidance image.
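A minimal numpy sketch of that PCA trick: treat the guidance image's CLIP embedding as a matrix of token/patch vectors, project onto the top principal directions as one part, and keep the residual as the other. The dimensions and the top-k split heuristic here are illustrative, not the paper's exact recipe.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for the guidance image's CLIP embedding: one vector per token/patch.
# Real CLIP gives e.g. 257 x 768; this toy split heuristic is illustrative.
E = rng.standard_normal((50, 64))

mean = E.mean(axis=0, keepdims=True)
U, S, Vt = np.linalg.svd(E - mean, full_matrices=False)

k = 8  # number of principal directions treated as one factor (e.g. "semantics")
semantic = (E - mean) @ Vt[:k].T @ Vt[:k] + mean  # projection onto top-k subspace
style = E - semantic                              # residual = everything else

# The two parts add back to the original embedding exactly.
print(np.allclose(semantic + style, E))  # True
```

Feeding only one of the two parts as guidance is then what lets the model keep an image's semantics while swapping its style, or vice versa.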
The results on dual guidance with both an image and a text prompt are slightly better than SD's, I guess.
Some results for the image-to-text-to-image task. They don't maintain the subject... Not sure what the use case for this is... 🤔