Can we help against spammers?

Is there any way we, as users, can help deal with the waves of spam-meds-bots? When I get the chance I downvote, but that's not possible for microblog. Does reporting them have any effect, or do they just go in the pile and end up more of a nuisance than a help?... #kbin #spam #kbinMeta

https://kbin.social/m/kbinMeta/t/958110


Reporting them at the very least sends a message to the mods of the community the reported post/comment was on. Not sure how or when reports reach instance admins, though, which is where they really need to go. Mods can block them from their community, but a spammer (human or bot) generally affects the entire server, so it needs to go all the way to the top.

Blocking them also works to at least reduce the bots' effectiveness. If everyone blocks it, it isn't doing anything but wasting bandwidth, and if it's not having the desired effect whoever deployed it might give up.

Most of the communities with significant spam problems have no moderators other than ernest. It's up to him to recruit more people to help moderate those.
Mostly we need ernest (or some other kbin developer) to develop more tools to combat spam. This is easy to ask for, but not easy to implement.

This is easy to ask for, but not easy to implement.

The problem I see (and what influenced the tone of my other comment) is that I don't think I've seen any acknowledgement of any sort of filtering, and this is a persistent problem. I get it, but it also seems really unnecessary to manually remove the 10 threads obviously not made by humans (or even just the 3 accounts that popped up in a close time-frame).

It doesn't need to be perfect. Any technique can be worked around eventually, but working around it introduces extra steps (ones spammers don't need to take now), which makes spamming harder and less likely. I think that makes moderation much more viable and impactful.

Even just some sort of auto-spoiler/warning (multiple suspicious keywords in a non-relevant community, a new account, 3 threads in an hour, etc.) could have an effect.
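As a rough illustration, those heuristics could be combined into a simple score-based filter. This is only a sketch; the field names, keyword list, weights, and threshold are all hypothetical, not anything kbin actually implements:

```python
from dataclasses import dataclass

# Hypothetical keyword list; a real deployment would maintain this per instance.
SUSPICIOUS_KEYWORDS = {"casino", "pharmacy", "crypto-giveaway"}


@dataclass
class Post:
    body: str
    author_account_age_days: int   # how new the account is
    author_posts_last_hour: int    # burst-posting signal
    community_topic_match: bool    # does the post fit the community's topic?


def spam_score(post: Post) -> int:
    """Toy heuristic: each signal from the thread adds to a score."""
    score = 0
    words = post.body.lower().split()
    keyword_hits = sum(1 for w in words if w in SUSPICIOUS_KEYWORDS)
    # Multiple suspicious keywords in a non-relevant community
    if keyword_hits >= 2 and not post.community_topic_match:
        score += 2
    # Brand-new account
    if post.author_account_age_days < 1:
        score += 1
    # Three or more threads in an hour
    if post.author_posts_last_hour >= 3:
        score += 1
    return score


def should_flag(post: Post, threshold: int = 2) -> bool:
    """Above the threshold, auto-spoiler the post and queue it for mod review."""
    return spam_score(post) >= threshold
```

The point of a scoring approach is exactly what the comment argues: no single signal needs to be decisive, and a spammer has to defeat several checks at once rather than none.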