Today is my last day at @LambdaAPI 😢

I've had a great time at Lambda: worked on cutting-edge DL projects, used amazing hardware in workstations, laptops, and the cloud, and most of all worked with a fantastic bunch of people!

Here's a thread of some of the public things I did:

Trained and shared a WikiArt model using StyleGAN3:

---
RT @Buntworthy
Following in the illustrious footsteps of @pbaylies I've trained a 1024x1024 StyleGAN3-T model on WikiArt. The model gets a FID score of 8.1 and will hopefully serve as a useful starting point for fine-tuning. Link and samples in the thread below 👇
https://twitter.com/Buntworthy/status/1459254801473196033

Trained an Image Variation version of Stable Diffusion:

---
RT @Buntworthy
Released my "Image Variations" version of Stable Diffusion. Get the code and models, along with some basic instruction, in my GitHub repo: https://github.com/justinpinkney/stable-diffusion
https://twitter.com/Buntworthy/status/1566744186153484288

Created Text-to-Pokemon by doing one of the first Stable Diffusion fine-tunes:
---
RT @Buntworthy
Fine tuning #stablediffusion to make Pokemon!

I wrote a quick guide on fine tuning your own Stable Diffusion: https://github.com/LambdaLabsML/examples/tree/main/stable-diffusion-finetuning

I also released my Pokemon model, you can try it out on Replicate: https://replicate.com/lambdal/text-to-pokemon
or with this Notebook: https://github.…
https://twitter.com/Buntworthy/status/1572214507468099586

Had a paper accepted to BMVC on connecting CLIP to StyleGAN:
---
RT @Buntworthy
Our paper "clip2latent" has been accepted to BMVC2022! 🎉

clip2latent uses a diffusion prior to generate StyleGAN latents from CLIP text encodings, adding text-to-image generation to any existing StyleGAN!

arXiv https://arxiv.org/abs/2210.02347
GitHub https://github.com/justinpinkney/clip2latent
https://twitter.com/Buntworthy/status/1577976991126462465

clip2latent: Text driven sampling of a pre-trained StyleGAN using denoising diffusion and CLIP

We introduce a new method to efficiently create text-to-image models from a pre-trained CLIP and StyleGAN. It enables text driven sampling with an existing generative model without any external data or fine-tuning. This is achieved by training a diffusion model conditioned on CLIP embeddings to sample latent vectors of a pre-trained StyleGAN, which we call clip2latent. We leverage the alignment between CLIP's image and text embeddings to avoid the need for any text labelled data for training the conditional diffusion model. We demonstrate that clip2latent allows us to generate high-resolution (1024x1024 pixels) images based on text prompts with fast sampling, high image quality, and low training compute and data requirements. We also show that the use of the well studied StyleGAN architecture, without further fine-tuning, allows us to directly apply existing methods to control and modify the generated images adding a further layer of control to our text-to-image pipeline.

Fine-tuned a version of Stable Diffusion at a smaller resolution, called miniSD: https://twitter.com/Buntworthy/status/1580575210641960961?s=20

(the model is available here btw: https://huggingface.co/lambdalabs/miniSD-diffusers)
---
RT @Buntworthy
Something I trained a while ago but never shared was #stablediffusion fine-tuned at 256x256. It's quite nice for quickly exploring/prototyping prompts as it's pretty fast.

If people are interested I can share the model.
https://twitter.com/Buntworthy/status/1580575210641960961

Implemented Imagic for Stable Diffusion:
---
RT @Buntworthy
Got Imagic running with Stable Diffusion, it's super easy to implement, will share a notebook soon!

Left: Input image, Right: Edited "A photo of Barack Obama smiling big grin" https://twitter.com/_akhaliq/status/1582175757153230849
https://twitter.com/Buntworthy/status/1582307817884889088

Quoted tweet from @_akhaliq: “Imagic: Text-Based Real Image Editing with Diffusion Models abs: https://t.co/xRW6F6w2ZG”

Trained a super-resolution Stable Diffusion model (before SD2 introduced an "official" one): https://twitter.com/Buntworthy/status/1593266512474824704?s=20

(You can get that model here: https://huggingface.co/lambdalabs/stable-diffusion-super-res)
---
RT @Buntworthy
Comparing my SD upscaler and rivers':

- original
- @rivershavewings latent upscaler
- My SD upscaler https://twitter.com/RiversHaveWings/status/1589724378492592128
https://twitter.com/Buntworthy/status/1593266512474824704

Helped to put together content for a DreamBooth workshop at NeurIPS 2022:
---
RT @stephenbalaban
This Monday 11/28 at NeurIPS, Lambda is hosting an expo workshop led by @chuanli11 and me. You’ll learn to fine tune stable diffusion to produce portraits like these. Note it’s at 9:30am! Not 7:30am!

Monday 11/28 9:30am-12:30pm central time
Room 293
NeurIPS Conference
https://twitter.com/stephenbalaban/status/1597073601798496257

Gave a talk on monkeying with Stable Diffusion as part of the @huggingface Diffusion Models Course live event:
---
RT @huggingface
To inspire you for our just-released Diffusion Models Course 🎓 with @johnowhitaker
we are excited to share the free online event with @hardmaru, @deviparikh, @Buntworthy, @robrombach, @pess_r and @multimodalart on Nov 30th at 18h CET🎋

Register here: https://huggingface.us17.list-manage.com/subscribe?u=7f…
https://twitter.com/huggingface/status/1597248942353584131

Got to play with the latest and greatest hardware from @LambdaAPI:

---
RT @Buntworthy
I think I've done very well in staying focussed prepping slides for this talk while these 8 things sit waiting for me in the terminal window next to it.
https://twitter.com/Buntworthy/status/1597986740941492226

Released an upgraded version of my Image Variations model:
---
RT @Buntworthy
📣2️⃣ Stable Diffusion Image Variations v2 release! 2️⃣📣

It took a while but I released the updated checkpoint for the image variations model. It was trained longer and more carefully than the original, and the image quality and similarity are better.
https://twitter.com/Buntworthy/status/1600518238240055296

And finally released my Image Mixer model:
---
RT @Buntworthy
📢🌀 Released my "Image Mixer" model!

Mix up concepts in multiple images and words to generate novel pictures!

Try it on @huggingface spaces here: https://huggingface.co/spaces/lambdalabs/image-mixer-demo
https://twitter.com/Buntworthy/status/1616042269945085953

Of course I did lots more I can't talk about, but @LambdaAPI have always been super supportive and encouraging of all my open source work and models.

It's been a real pleasure to work with @chuanli11 @stephenbalaban @mitesh711 @eoluscvka @jmhummel__ and many more!

If you need a well equipped workstation, an amazing value cloud laser-focussed on deep learning, or a massive GPU cluster I can definitely recommend @LambdaAPI!

And as for what's next? I'm very excited to be joining @midjourney to work on hands down the best image generator in the world right now!