So the #stablediffusion depth model is really awesome! I've been experimenting with a new workflow that uses a rough 3D blockout in #blender and then #depth2img to render it.
Video timelapse: https://www.youtube.com/watch?v=L6J4IGjjr9w
Using 3D and Depth2Img for Concept Art (Stable Diffusion & Blender)

RT @TomLikesRobots: Another test combining #stablediffusion #depth2img with #ebsynth. This time, a cel-shaded animation from a video of me (not usually so croaky and full of a cold!). Background masked out. Need to see if a better source using a DSLR and lighting improves the quality of the end animation. #aiart QT @TomLikesRobots: A very quick test using depth-guided #img2img and #EbSynth from @scrtwpns. Temporal coherence is far better than vid2vid, and #depth2img creates a really accurate keyframe. I need to do a deep dive into this. Masking and using AI-generated environments? #stablediffusion #aiart [quoted tweet is unavailable] 2023-01-01 20:27:57 UTC
c0de517e/AngeloPesce on Twitter
Using the #stablediffusion2 #depth2img model to turn a 3D #daz model into a photo. It lets you control pose, lighting, hair, and wardrobe much more tightly, and manage fingers/ears. Link to my tutorial on Twitter below! #aiphotography #aicinema #aiart #MachineLearning #aiartprocess #aiia
Used the #stablediffusion2 #depth2img model to render a more photoreal layer on top of a walking animation I made in #UnrealEngine5, with #realtime clothing and hair on a #daz model. #aiart #MachineLearning #aiartcommunity #stablediffusion