SparkVSR is an open-source video super-resolution tool that takes low-resolution video and restores it to high quality, with one difference that separates it from everything else in this space: you can control the output using keyframes. #ai
https://firethering.com/sparkvsr-video-upscaling/

#opensource #ai #upscale

SparkVSR lets you control AI video upscaling with just a few keyframes

A research team from Texas A&M and YouTube quietly dropped SparkVSR on GitHub. No big announcement or hype cycle. Just a repo and a paper. Everyone right now is chasing text-to-video. Sora, Kling, Wan, the list keeps growing. But nobody is talking about the much harder problem sitting right underneath all of it. What happens when your existing footage, your old clips, your AI-generated videos just do not look good enough? You upscale them, the AI guesses, and you get flickering textures and smeared faces with zero way to fix it. SparkVSR is the first tool I have seen that actually lets you step in and correct that.

Firethering

Does anyone actually like the #upscaling functionality in #videogames? Playing at lower resolutions but using the graphics card to #upscale it?

All of my experiences lead to the same result: higher resolutions, but grainy, worse-looking images with ghosting artifacts, plus a game-breaking amount of input lag.

No thanks, I'd rather just have shittier graphics and enjoy my game :D

Filtered through a cinematic "Imitation of Life" 📸🦩🦩🖼
#Photoshoot #Exhibit
#Upscale #Flamingos
#Decorative #Humble
#Expressive #Boutique
#Luxuriant #Effulgent
It is correct that @fdroidorg #ImageToolbox gets #AI. It is local and it works, though if the phone cannot provide enough compute, this new feature is not fun to use. However, #DeJPEG, #upscale, #colorization, and #enhancement are now available. Background removal also offers more models in the #FLOSS version.
YES SUCCEEDED!!!

Just rendered an image at 944×1152 (slightly above 1024×1024) using Flux1-Schnell-FP8 on my 6700 XT, and it works! (Image 1 is the Real-ESRGAN 2× upscaled version)

Workflow 1: Sampling (Image 2)

Prompt executed → UNet generates the latent

Step 1 (model load + latent generation) took 419 seconds

Output: Latent tensor saved to disk

Workflow 2: VAE Decode (Image 3)

Latent loaded → VAE decodes the image

Duration: 7.5 seconds

Advantage: UNet doesn’t need to stay in VRAM → VRAM freed, even on 12 GB GPUs
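The two-workflow split above can be sketched in a few lines. This is a stand-in, not ComfyUI's real code: a NumPy array plays the role of the torch latent tensor and .npz plays the role of the .latent file format, but the idea is the same, persist the latent after sampling so the UNet can be fully unloaded before the VAE runs.

```python
import os
import tempfile
import numpy as np

# Workflow 1 ends by saving the latent to disk; Workflow 2 starts by
# loading it, so the UNet and the VAE never have to share VRAM.

def save_latent(latent: np.ndarray, path: str) -> None:
    """End of Workflow 1: persist the latent, then unload the UNet."""
    np.savez(path, latent_tensor=latent)

def load_latent(path: str) -> np.ndarray:
    """Start of Workflow 2: only the VAE and this tensor need memory now."""
    return np.load(path)["latent_tensor"]

# Round-trip demo: a fake latent for a 944x1152 image. The VAE downsamples
# by 8x, and Flux uses 16 latent channels, so the latent is 1x16x118x144.
latent = np.random.rand(1, 16, 118, 144).astype(np.float32)
path = os.path.join(tempfile.mkdtemp(), "preview.npz")
save_latent(latent, path)
restored = load_latent(path)
assert np.array_equal(latent, restored)
```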

The problem with the stock LoadLatent Node

Dropdown only shows files if they were produced / annotated by a previous SaveLatent Node

Node is designed to pass latents inside a graph, not load arbitrary files from disk

Purpose: prevents accidentally loading wrong files

Workaround (Image 4)

Edited /ComfyUI/nodes.py, class LoadLatent

Hardcoded latent path → Node now loads directly from disk

Result: Workflow 2 runs instantly, UNet can be unloaded
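Instead of hardcoding a path in nodes.py (which the next git pull will overwrite), the same workaround could live in a small custom node. The sketch below follows ComfyUI's usual node conventions; the "latent_tensor" key, the default path, and the exact loading details are assumptions based on how SaveLatent files typically look, so verify against your own install before relying on it.

```python
# Hypothetical custom node: drop into ComfyUI/custom_nodes/load_latent_path.py.
# Unlike the stock LoadLatent dropdown, it accepts an arbitrary path string.

class LoadLatentFromPath:
    @classmethod
    def INPUT_TYPES(cls):
        # A free-form string input instead of the stock file dropdown.
        return {"required": {"path": ("STRING", {"default": "/tmp/preview.latent"})}}

    RETURN_TYPES = ("LATENT",)
    FUNCTION = "load"
    CATEGORY = "latent"

    def load(self, path):
        # SaveLatent files are safetensors with a "latent_tensor" key
        # (an assumption; inspect your own .latent files to confirm).
        import safetensors.torch
        data = safetensors.torch.load_file(path)
        return ({"samples": data["latent_tensor"].float()},)

# Registration hook ComfyUI looks for in custom node modules.
NODE_CLASS_MAPPINGS = {"LoadLatentFromPath": LoadLatentFromPath}
```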

Timing

Step 1 (model load + latent generation): 419 s

Step 2 (VAE decode): 7.5 s

Result: High-res images on a 12 GB RDNA2 GPU are now possible on Flux1-Schnell-FP8 without ComfyUI crashing! (Image 5 is the original output)

This might actually become my new Flux workflow: render quick 512×512 previews first (which works perfectly on RDNA2 GPUs), sort out the good ones, extract the seed from the PNG metadata, and then re-render only the selected images with the same seed using the split workflow at higher resolutions. This way, high-resolution Flux1-Schnell-FP8 renders become possible on 12 GB RDNA2 GPUs D:
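The seed-extraction step can be scripted rather than done by hand. ComfyUI embeds the prompt graph as JSON in a PNG text chunk (commonly named "prompt"); the sketch below walks that graph and collects every "seed" input. The chunk name and graph layout are assumptions here, so check one of your own PNGs first.

```python
import json
import os
import tempfile
from PIL import Image
from PIL.PngImagePlugin import PngInfo

def extract_seeds(png_path: str) -> list[int]:
    """Collect all integer 'seed' inputs from the embedded prompt graph."""
    img = Image.open(png_path)
    raw = img.text.get("prompt")  # assumed chunk name for the graph JSON
    if raw is None:
        return []
    seeds = []
    for node in json.loads(raw).values():
        value = node.get("inputs", {}).get("seed")
        if isinstance(value, int):
            seeds.append(value)
    return seeds

# Demo: build a fake ComfyUI-style PNG and read the seed back.
meta = PngInfo()
meta.add_text("prompt", json.dumps(
    {"3": {"class_type": "KSampler", "inputs": {"seed": 123456789}}}))
path = os.path.join(tempfile.mkdtemp(), "demo.png")
Image.new("RGB", (8, 8)).save(path, pnginfo=meta)
print(extract_seeds(path))  # [123456789]
```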

Question at the end: Has anyone ever done this before? Because I have no clue xD

#ComfyUI #flux #Flux1SchnellFP8 #FP8 #AMD #RDNA2 #VAE #AIArt #Pixelfed #HighResolution #GPUOptimization #LatentWorkflow #AIWorkflow #AIHacks #RealESRGAN #Upscale #AIExperiment #CreativeAI #DigitalArt #AICommunity #python #linux #opensource #foss
Rendered Garfield images using Flux on my RX 6700 XT.
Includes both fully 3D and classic comic style bench-press scenes.
Upscaled the outputs with Real-ESRGAN.

#Garfield #Comics #3DKunst #RealESRGAN #Katze #DigitalArt #3D #FluxAI #3DArt #Upscale #Cat #3DRendering #RX6700XT #Flux #aiart #LocalAI