I think it may be time to start discussing how much longer we can keep dealing with spam from mastodon.social on a onesie-twosie basis.

#lawfedi and #lawyer or #legal adjacent folks that haven't hopped to a smaller instance and are still on the main one, please consider making a move. We've had more spam reports today than in the last 6 months combined.

It doesn't have to be to https://esq.social if you don't want to, but we'd of course be happy to have you.

@law #mastoadmin


@law

Just to be clear, if we can avoid defederating from mastodon.social by any means, we absolutely will. It is the biggest player by a large margin.

That said, the possibility of it happening at some point in the future is certainly more likely today than it was yesterday.

#lawfedi

@andrew @law We (mastodon.social) blocked this spammer less than 15 minutes after they started. Reports continued flowing for hours afterward as users saw the messages, but our response was very quick.
We are also working on more tools to prevent this from happening (not just for mastodon.social, but for every server).
I am really sorry this is happening.
@andrew @law This kind of attack could target any server with open registration (and luckily has not yet), and most won't react as quickly.
Spam needs to be fought, and we are very actively working on it, but if we start to defederate based on this, then the Fediverse as a whole will suffer.

@renchap

Thanks for all the above and below. I hope you came across the post I made clarifying my initial statement to make clear I am not impugning anyone's efforts.

My tacit question remains, though: do you think moderation thorough enough that those kinds of spam attacks don't happen to begin with is possible on an instance of m.social's size?

@andrew (switching to unlisted) Yes it is definitely possible. A big instance can have a full-time + around-the-clock moderation team + technical staff, which is something we kind of have (and are working to improve).
I am very afraid of the moment one of these people figures out they can run their scripts against any open-registration instance. Without very reactive admins, those instances will get limited/defederated very quickly by everyone, killing them. This is not good at all.
@andrew This is why I think (personal opinion here, don't read it as Eugen's or anybody else's!) that we need built-in tools to fight this so any instance operator can use them, but also a way to use one (or multiple) external spam/moderation "providers", which would be able to staff such teams and pay for IP reputation / spam fighting / … APIs to take this burden off individuals.
This is only the beginning of bad people discovering Mastodon, and so far we have been lucky its only this kind of spam.
@renchap Love this idea. Thanks for all of your hard work, I can't speak for other instance admins but personally it isn't going unnoticed or unappreciated.
@andrew @renchap do you think it can be avoided at any smaller size? At some point, once the broad technology becomes generally accepted as a spam vector (and we'll just think of Mastodon as one thing for the moment), every nook and cranny can be attacked the same way. Think of Usenet in that regard... once the Endless September began, no group was safe from turd blooms of noise. Defederating wouldn't be the best solution in that case...

@zeruch @renchap

Well, it can be solved from an origination standpoint at a small size. Imagine an instance of one with closed signups. Scale that up slowly, only insofar as you can continue to closely monitor and approve each new signup. Voila.

If you then treated federation as opt-in by default rather than opt-out, you could mimic the same process there. Each new instance you federate with would be vetted and admitted only on the strength of its moderation.