@lcamtuf Even if (especially if) a post says the right thing, I disengage as soon as I realise it is likely LLM-generated. This includes LLM images, because imo there is no good reason to use slop images; plenty of free human-made images are available with a bit of creativity. I want to read about human perspectives and experiences, and LLMs break that assumption big-time.

I also mistrust the intentions behind the person/bot writing slop: imo the bot, by definition, has no intentions of its own, yet it still reflects the various intentions of its (billionaire) creators, so what comes out is a watered-down version of whatever the original intention was.

The only exception is security articles where the organisation reporting the issue decided to have a slop machine write the prose: that organisation is still one of the best sources for the info, even if they chose to slop-write it. There are a couple of other edge-cases where I'll take AI slop over nothing at all, but those edge-cases never involve longform think-pieces.