If you're filling out fake reports to overload an Orwellian snitch website, just remember that authentic-looking fake reports waste more time than obvious, jokey ones.
@amydentata Ollama can run LLMs locally, cutting energy consumption to just your on-demand usage rather than an entire cloud farm's, and it can be used to generate really convincing gibberish.
As seen in the broader world of LLMs, of course.

