| Atproto Account: | @chaotichuman.eurosky.social |
To all anti-AI artists who claim "AI art is just typing in some prompts": here is an AI-generated image and the custom #ComfyUI workflow I used to generate it.
Me: *prompts a non-existent series ("Shadows of Alagadda") and episode ("The Hanged King's Trials"), styled like an IMDb entry, that is entirely based around concepts from the SCP multiverse*
#LLaMa: *writes an incoherent, non-existent Doctor Who episode that is basically also a character crossover with many other franchises, except anything SCP-related (ok, the name "Hanged King" is still mentioned in the output, but probably only because it was in the episode part of my prompt)*
Anyway: here is the mask (which I created with Blender & GIMP) that I used as input to test the #controlnet MLSD model, plus three resulting outputs (generated with txt2img, not img2img).
I used the ControlNet extension (https://github.com/Mikubill/sd-webui-controlnet) for the AUTOMATIC1111 webui and Deliberate v1.1 (https://civitai.com/models/4823/deliberate) as the #StableDiffusion model.
You can find the ControlNet models at https://huggingface.co/lllyasviel/ControlNet
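For anyone who wants to script this setup instead of clicking through the UI: a minimal sketch of how a txt2img request with an MLSD ControlNet unit could be built for the webui's API (started with the `--api` flag). The model filename, prompt, and parameter values here are placeholders, not the exact settings I used for the posted images.

```python
import json

def build_controlnet_payload(prompt: str, mask_b64: str) -> dict:
    """Build a JSON body for POST /sdapi/v1/txt2img with one MLSD ControlNet unit.

    The sd-webui-controlnet extension reads its units from the
    "alwayson_scripts" section of the request; field values below are
    illustrative placeholders.
    """
    return {
        "prompt": prompt,
        "steps": 25,
        "width": 512,
        "height": 512,
        "alwayson_scripts": {
            "controlnet": {
                "args": [
                    {
                        "input_image": mask_b64,       # base64-encoded mask image
                        "module": "mlsd",              # MLSD line-detection preprocessor
                        "model": "control_sd15_mlsd",  # placeholder checkpoint name
                        "weight": 1.0,
                    }
                ]
            }
        },
    }

payload = build_controlnet_payload("a gothic throne room", "<base64 mask here>")
print(json.dumps(payload, indent=2))
```

The dict would then be POSTed to `http://127.0.0.1:7860/sdapi/v1/txt2img`; the response contains the generated images as base64 strings.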
Just found out it's possible to merge the #InstructPix2Pix model with the #riffusion model by using the recipe in the image of this post.
And the most interesting part: the resulting InstructPix2Pix-riffusion model indeed still only outputs spectrograms. The results are otherwise not that good (I guess the reason is that the GPT-3 component of InstructPix2Pix was not optimized for spectrograms), but it's still interesting that this merger kinda works.
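The actual merge recipe is in the attached image; as a generic illustration of the underlying operation, this is a plain weighted-sum merge of two checkpoint state dicts, the same basic arithmetic the webui's checkpoint-merger tab performs. Real checkpoints hold torch tensors; plain floats stand in here so the sketch runs without dependencies, and the alpha value and toy weights are made up.

```python
def weighted_merge(sd_a: dict, sd_b: dict, alpha: float = 0.5) -> dict:
    """Return (1 - alpha) * A + alpha * B for every weight both models share.

    Weights only present in model A are copied through unchanged; with
    tensors instead of floats the same arithmetic applies elementwise.
    """
    merged = {}
    for key, value_a in sd_a.items():
        if key in sd_b:
            merged[key] = (1.0 - alpha) * value_a + alpha * sd_b[key]
        else:
            merged[key] = value_a  # keep A's weight where B has no counterpart
    return merged

# toy "checkpoints" with made-up keys and values
a = {"unet.w": 1.0, "vae.w": 2.0}
b = {"unet.w": 0.0}
m = weighted_merge(a, b, alpha=0.25)
# m["unet.w"] == 0.75, m["vae.w"] == 2.0
```

A mismatched merge like this "working" at all mostly means the two checkpoints share the same architecture and parameter names, so the interpolation is at least well-defined.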
Attached: 1 image · Content warning: speech by ChatGPT
lol (and yes, I really was a member of the #Piratenpartei at one point).