The developers probably added a pretty detailed prompt to the backend that the AI uses whenever someone asks it to do this. If I were making an AI, I would not risk it spitting out something scary when given this prompt.
I’m a pretty firm believer that Loab is a hoax “cryptid” due to the author’s unwillingness to publish anything about the generation parameters. I think the author got one weird result from putting “Loab” in the negative prompt and then used img2img from there.
Anyone who shows images of a “cryptid” they discovered but refuses to show proof is untrustworthy in my opinion.