I think using LLMs to write your communication for you is a bad idea.

Mostly that's because what you're actually trying to communicate is already contained in the prompt you fed to the model. By sending the output instead, you're forcing the recipient to effectively reverse-engineer your prompt.

Just send me the prompt. I can always feed it into an LLM myself if I want to. But most likely I won't need to, because the prompt already says what you're trying to say.