The first two WebPs (WAN 2.1) show a cat on a motorcycle (front view and side view). They're based on a test prompt from Z-Image, adapted for motion.
The third WebP is my (so far never published) first attempt with Stable Video Diffusion from early January 2026... image-to-video instead of text-to-video.
Model: stableVideoDiffusion_img2vidXt11.safetensors
I first generated a still image with SD1.5, then added subtle motion with this model. Everything was rendered locally on my RX 6700 XT.
#StableDiffusion #SD15 #SDV #Img2Vid #AIAnimation #LocalAI #AMD #StableVideoDiffusion #ComfyUI #AIVideo #VideoGeneration #OpenSource #FOSS #ROCm #RDNA2 #AIGenerated #CreativeAI #ExperimentalAI #wan21