By the end of 2017 my efforts to improve resolution were obliterated by two major breakthroughs: in short succession @NvidiaAI first showed their highly realistic celebrities made with #PGAN, then followed up with #pix2pixHD shortly after.
https://twitter.com/quasimondo/status/928604109602770945?s=20

Mario Klingemann on Twitter: “Here's my unsuccessful attempt to have my model "improve" those @NvidiaAI generated celebrity portraits.”
Both models are shallow ResNets derived from #pix2pixHD. I first tried #pix2pix U-Nets, but those models learned to cheat very quickly: they abused the first skip connection to pass the information through almost uncompressed.
The principle is pretty simple: in a classic residual architecture you chain several residual blocks behind each other (in #pix2pixHD the default is 9 blocks). In #RecuResGAN I instead use a single block and loop over it 9 times, feeding its output back into its input.
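The looping idea can be sketched in a few lines. This is not the actual #RecuResGAN or #pix2pixHD code; it is a toy NumPy sketch in which a per-pixel linear map plus tanh stands in for the conv-norm-relu body of a real residual block, and all names (`make_residual_block`, `recursive_resnet`, `n_loops`) are my own illustrative choices:

```python
import numpy as np

def make_residual_block(channels, rng):
    # A toy stand-in for a residual block's body: one shared per-pixel
    # linear map followed by tanh (assumption, not the real conv stack).
    w = rng.standard_normal((channels, channels)) * 0.1
    def block(x):
        # x has shape (height, width, channels); residual connection: x + F(x)
        return x + np.tanh(x @ w)
    return block

def recursive_resnet(x, block, n_loops=9):
    # The recursive-residual idea: instead of 9 distinct residual blocks,
    # loop 9 times over a single block, feeding its output back into its input.
    for _ in range(n_loops):
        x = block(x)
    return x

rng = np.random.default_rng(0)
block = make_residual_block(8, rng)
x = rng.standard_normal((4, 4, 8))
y = recursive_resnet(x, block, n_loops=9)
print(y.shape)  # (4, 4, 8)
```

Because the 9 iterations share one set of weights, the residual-block parameters shrink by roughly a factor of 9 compared with 9 distinct blocks, which is presumably where most of the size reduction mentioned below comes from.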
I've created an experimental GAN architecture I call #RecuResGAN or "Recursive-Residual GAN", and I am pretty astonished that:
- it works at all
- it works well across a pretty wide range of scales
- it is just 15% of the size of a comparable #pix2pixHD model