The first two WebPs (WAN 2.1) show a cat on a motorcycle (front view and side view). They're based on a test prompt from Z-Image, adapted for motion.

The third WebP is my (so far unpublished) first attempt with Stable Video Diffusion from early January 2026... image-to-video instead of text-to-video.

Model: stableVideoDiffusion_img2vidXt11.safetensors

First generated a still image with SD1.5, then added subtle motion using this model.

All rendered locally on my RX 6700 XT

#StableDiffusion #SD15 #SDV #Img2Vid #AIAnimation #LocalAI #AMD #StableVideoDiffusion #ComfyUI #AIVideo #VideoGeneration #OpenSource #FOSS #ROCm #RDNA2 #AIGenerated #CreativeAI #ExperimentalAI #wan21

I guess this video is one of my official flops. But I still think it would be funny if AMD received a handful of respectful bug reports on this issue 😀

No FSR 4 on RDNA 2 or 3? Send a Bug Report!

https://www.youtube.com/watch?v=_PsjkpgMWI0

#AMD #FSR4 #RDNA2 #RDNA3 #RDNA4 #Radeon #Gaming

A small making-of of my “True Beauty Is So Painful” piece (with “True Beauty Is So Painful” by Oomph! playing in the background), because “AI art = just pressing a button” is still a thing.

Here I briefly walk through my SDXL workflow in ComfyUI (kept short because of the 15 MB upload limit), from node structure to model choice to parameters.

LoRAs in this setup are only linked to the positive prompt, because I wanted to fine-tune their weights there specifically, without affecting the negative prompt.
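The wiring idea can be sketched in ComfyUI's API ("prompt") format, here built as a plain Python dict. Node ids, filenames, and prompt text are made up for illustration; this is not the actual workflow from the video.

```python
# Sketch of the node wiring, assuming ComfyUI's API "prompt" format:
# each entry is {"class_type": ..., "inputs": {...}}, and [node_id, slot]
# references another node's output. Ids and filenames are hypothetical.
prompt = {
    "1": {"class_type": "CheckpointLoaderSimple",
          "inputs": {"ckpt_name": "sdxl_base.safetensors"}},
    "2": {"class_type": "LoraLoader",
          "inputs": {"model": ["1", 0], "clip": ["1", 1],
                     "lora_name": "my_style.safetensors",
                     "strength_model": 0.8, "strength_clip": 0.8}},
    # The positive prompt is encoded with the LoRA-patched CLIP...
    "3": {"class_type": "CLIPTextEncode",
          "inputs": {"text": "portrait, painful beauty", "clip": ["2", 1]}},
    # ...while the negative prompt keeps the stock CLIP, untouched by the LoRA.
    "4": {"class_type": "CLIPTextEncode",
          "inputs": {"text": "blurry, lowres", "clip": ["1", 1]}},
}
```

Note that this only splits the text-encoding side: the LoRA's model patch (the MODEL output of LoraLoader) still affects the whole sampling pass.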

During rendering, I ran in parallel:
- GPU load with radeontop: you can clearly see how on RDNA2 everything (matrix multiplications, convolutions, etc.) runs on the shader cores
- Temperatures & power states, shown briefly with CoreCtrl

Peak at 187 W, hotspot briefly at 97 °C
RDNA2 doing RDNA2 things…

Video workflow:
- recorded with OBS
- edited in Kdenlive
- transcoded with VAAPI (H.264)

No cloud, just decisions, iteration and real hardware.
Everything runs on Linux + ComfyUI (FOSS), so anyone can set this up.
No GPU? No problem: you can also run it on PyTorch’s CPU backend, just much slower.

#AIArt #ComfyUI #SDXL #stablediffusion #LoRA #FOSS #Linux #AMD #RDNA2 #GPUComputing #OpenSource #AIWorkflow #OBS #Kdenlive #VAAPI #DigitalArt #MakingOf #AIProcess #NoCloud
YES, IT WORKED!!!

Just rendered an image at 944×1152 (slightly above 1024×1024) using Flux1-Schnell-FP8 on my 6700 XT, and it works! (Image 1 is the Real-ESRGAN 2× upscaled version)

Workflow 1: Sampling (Image 2)

Prompt executed → UNet generates the latent

Step 1 (model load + latent generation) took 419 seconds

Output: Latent tensor saved to disk

Workflow 2: VAE Decode (Image 3)

Latent loaded → VAE decodes the image

Duration: 7.5 seconds

Advantage: UNet doesn’t need to stay in VRAM → VRAM freed, even on 12 GB GPUs

The problem with the stock LoadLatent Node

The dropdown only shows files that were produced/annotated by a previous SaveLatent node

The node is designed to pass latents along inside a graph, not to load arbitrary files from disk

Purpose: preventing wrong files from being loaded accidentally

Workaround (Image 4)

Edited /ComfyUI/nodes.py, class LoadLatent

Hardcoded latent path → Node now loads directly from disk

Result: Workflow 2 runs instantly, UNet can be unloaded
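As far as I can tell, the .latent files SaveLatent writes are plain safetensors containers, so you can sanity-check what Workflow 2 will load without importing torch at all. A stdlib-only sketch, assuming the standard safetensors layout (the path in the usage comment is made up):

```python
import json
import struct

def read_safetensors_header(path):
    """Read just the JSON header of a safetensors file: the first 8 bytes
    are a little-endian u64 header length, followed by the header itself."""
    with open(path, "rb") as f:
        (header_len,) = struct.unpack("<Q", f.read(8))
        return json.loads(f.read(header_len).decode("utf-8"))

# Hypothetical usage on a latent saved by Workflow 1:
# header = read_safetensors_header("ComfyUI/output/latents/flux_latent.latent")
# print(header["latent_tensor"]["shape"])  # dtype/shape without loading the data
```

This reads only the header, so even multi-hundred-MB latents are inspected instantly.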

Timing

Step 1 (model load + latent generation): 419 s

Step 2 (VAE decode): 7.5 s

Result: High-res images on a 12 GB RDNA2 GPU are now possible on Flux1-Schnell-FP8 without ComfyUI crashing! (Image 5 is the original output)

This might actually become my new Flux workflow: render quick 512×512 previews first (which works perfectly on RDNA2 GPUs), sort out the good ones, extract the seed from the PNG metadata, and then re-render only the selected images with the same seed using the split workflow at higher resolutions. This way, high-resolution Flux1-Schnell-FP8 renders become possible on 12 GB RDNA2 GPUs :D
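The seed-extraction step can be done with the standard library alone, assuming ComfyUI's habit of embedding the API-format graph as JSON in a `prompt` tEXt chunk of its output PNGs. The `seed`/`noise_seed` input names are what sampler nodes typically carry; treat this as a sketch, not a guaranteed match for every node pack:

```python
import json
import struct

def extract_comfy_seeds(png_path):
    """Scan a PNG's tEXt chunks for ComfyUI's embedded 'prompt' JSON and
    return {node_id: seed} for every node carrying a seed-like input."""
    seeds = {}
    with open(png_path, "rb") as f:
        if f.read(8) != b"\x89PNG\r\n\x1a\n":
            raise ValueError("not a PNG file")
        while True:
            head = f.read(8)  # 4-byte big-endian length + 4-byte chunk type
            if len(head) < 8:
                break
            length = struct.unpack(">I", head[:4])[0]
            ctype = head[4:8]
            data = f.read(length)
            f.read(4)  # skip the CRC
            if ctype == b"tEXt":
                key, _, text = data.partition(b"\x00")
                if key == b"prompt":
                    graph = json.loads(text.decode("utf-8", "replace"))
                    for node_id, node in graph.items():
                        inputs = node.get("inputs", {})
                        for name in ("seed", "noise_seed"):
                            if isinstance(inputs.get(name), int):
                                seeds[node_id] = inputs[name]
            if ctype == b"IEND":
                break
    return seeds
```

From there, re-rendering is just pasting the seed back into the sampler of the split high-res workflow.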

Question at the end: Has anyone ever done this before? Because I have no clue xD

#ComfyUI #flux #Flux1SchnellFP8 #FP8 #AMD #RDNA2 #VAE #AIArt #Pixelfed #HighResolution #GPUOptimization #LatentWorkflow #AIWorkflow #AIHacks #RealESRGAN #Upscale #AIExperiment #CreativeAI #DigitalArt #AICommunity #python #linux #opensource #foss
AMD says that it’s not pulling driver support for older Radeon GPUs after all https://arstechni.ca/PJ2N #Gaming #Radeon #rdna2 #Tech #rdna #AMD
After confusing driver release, AMD says old GPUs are still actively supported

Re-using old silicon means that dropping "old" GPUs can affect "new" products.

Ars Technica
#AMD is trying to drop support for "older" #RDNA2-based graphics products even though those chips are still used in actively built and sold products. https://www.youtube.com/watch?v=dkPPejQXFNo (AMD stands for "Advanced Marketing Disaster")
AMD Says We're "Confused"

YouTube
After heavy criticism, #AMD clarifies: #RDNA1 and #RDNA2 graphics cards are to keep receiving game optimizations after all - though only "according to market needs". #Gaming #Radeon https://winfuture.de/news,154647.html?utm_source=Mastodon&utm_medium=ManualStatus&utm_campaign=SocialMedia
AMD walks back its controversial GPU driver decision - sort of

Following widespread confusion over its latest Adrenalin update, AMD clarifies: RDNA 1 and RDNA 2 graphics cards will continue to receive game optimizations - though possibly only partially. The company is also walking back other points.

WinFuture.de
With the latest #Adrenalin update for its #Radeon graphics cards, #AMD is officially ending support for #Windows10. In addition, older #RDNA1 and #RDNA2 #GPUs will no longer receive optimizations for new games. https://winfuture.de/news,154613.html?utm_source=Mastodon&utm_medium=ManualStatus&utm_campaign=SocialMedia
AMD Radeon: Older GPUs lose full support, end of the line for Windows 10

With its latest Adrenalin update, AMD officially ends support for Windows 10 and additionally restricts optimizations for new games on RDNA 1 and RDNA 2 GPUs. Owners of older RX 5000 and RX 6000 cards will be left empty-handed going forward.

WinFuture.de

The bane of the Steam Deck is #UnrealEngine5.

#Zen2 and #RDNA2 are power efficient, but not powerful enough to drive games as badly optimized as UE5 titles like #Borderlands4.

#UE5 makes the case for powerful handhelds, but at that point, why not just get a laptop that’s better value? Even the cheapest RTX laptop will beat any handheld any time.

#Videogames #Gaming #Games #Laptop #Hardware #Handhelds #Handheld #SteamDeck #Steam #Valve #Laptops #UnrealEngine #EpicGames #GameDevelopment