@modean987 I've been using all kinds of AI tools for images ever since #DeepDream happened in the mid-2010s, that thing with the puppyslugs. You input one image, pick which neuron layer of the model to amplify and how many iterations to run, and you get a new image that somehow grew out of the old one. A little later, Neural Style Transfer aka #DeepStyle made it possible to mix two images, one for content and one for style. I made enormous volumes of images with that. #aiart
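(For the curious: the "pick a layer, run N iterations" loop is just gradient ascent on the input image. Here's a toy NumPy sketch of that idea — a random 3×3 kernel stands in for a real network layer, and the function names are mine, not from any DeepDream release.)

```python
import numpy as np

def conv2d(x, k):
    """Valid cross-correlation of image x with kernel k (the stand-in 'layer')."""
    kh, kw = k.shape
    H, W = x.shape
    out = np.zeros((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * k)
    return out

def objective(x, k):
    """DeepDream-style objective: total squared ReLU activation of the layer."""
    return 0.5 * np.sum(np.maximum(conv2d(x, k), 0.0) ** 2)

def dream_step(x, k, lr=0.01):
    """One gradient-ascent step: nudge the image to excite the layer more."""
    a = conv2d(x, k)
    g_a = np.maximum(a, 0.0)              # dL/da for the ReLU-squared objective
    g_x = np.zeros_like(x)                # backprop: scatter g_a onto the image
    kh, kw = k.shape
    for i in range(g_a.shape[0]):
        for j in range(g_a.shape[1]):
            g_x[i:i + kh, j:j + kw] += g_a[i, j] * k
    return x + lr * g_x                   # ascent, not descent

rng = np.random.default_rng(0)
img = rng.standard_normal((16, 16))       # "input one image"
kernel = rng.standard_normal((3, 3))      # stand-in for the chosen neuron layer

before = objective(img, kernel)
for _ in range(10):                       # "how many iterations you want"
    img = dream_step(img, kernel)
after = objective(img, kernel)
print(before, after)
```

With a real network you'd maximize a chosen layer of a pretrained CNN and the amplified features are what grow into the puppyslugs; here the objective just climbs, which you can see from the printed before/after values.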
@modean987 After that we had all kinds of GANs for generating images, mostly very weird and often disturbing, some outright body horror, but I really like images like that. Then latent diffusion arrived, though I never used it when it was new because it was only available through the pay-for-play application DALL-E (1.0 back then). Stability AI was working on its open-source take, Stable Diffusion, but it took about a year before that became actually usable with v1.5.
#aiart