Been messing around with a little #prototype #neuralnetwork. It behaves like a #video #codec: a frozen set of weights from #StableDiffusionXL generates keyframes, and its latent space is carried forward into a set of networks that play the role of the B-frame motion vectors used by codecs like #MPEG. Those are smaller networks, so I can train them on my regular old #GPU while relying on the work of the big boys for generating I-frames via #SDXL.
At least that's the theory. Reality remains to be seen.
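For anyone curious what that might look like on paper: here's a toy numpy sketch of the idea, where two keyframe latents stand in for SDXL I-frames and a tiny parameterized predictor stands in for the small "B-frame" networks. All names, shapes, and the blend-plus-residual form are my assumptions for illustration, not the actual prototype (real motion compensation would warp latents, not just blend them).

```python
import numpy as np

# Toy latent dims; real SDXL latents are 4 x H/8 x W/8 at much larger H, W.
C, H, W = 4, 8, 8
rng = np.random.default_rng(0)
z_prev = rng.standard_normal((C, H, W))  # latent of preceding "I-frame"
z_next = rng.standard_normal((C, H, W))  # latent of following "I-frame"

def predict_b_latent(z_a, z_b, params):
    """Stand-in for a small trainable 'B-frame' network: blend the two
    anchor latents and add a learned residual. A real version would
    predict motion/warping in latent space instead of a plain blend."""
    alpha, residual = params
    return alpha * z_a + (1.0 - alpha) * z_b + residual

# Toy "trained" parameters: midpoint blend, zero residual.
params = (0.5, np.zeros((C, H, W)))
z_mid = predict_b_latent(z_prev, z_next, params)
print(z_mid.shape)  # (4, 8, 8)
```

The point of the sketch is just the division of labor: the big frozen model does the expensive I-frame work once per keyframe, and everything in between is cheap enough to fit on consumer hardware.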
