Let's test my theory about the bots harvesting links from here on Mastodon.

These links have not been seen anywhere so far; they're unique to this post.

https://another.evilgeniusrobot.uk/test-on-mastodon-only

https://an.evilgeniusrobot.uk/test-on-mastodon-exclusive

I suspect these will start to show up in the logs at some point. I'll exclude the #MastoDDoS effect from the stats; that doesn't count.

#Thinkbot has been pretty fast on the uptake, but it could be that those posts got boosted widely, I dunno.

[Hmm.. "clever hand sparrow", indeed :) ]

#botnet


So about that #MastoDDoS issue. I just made a test post.

With my 730 followers, that created a fairly steady stream of about 5 requests per second for about half a minute, with some stragglers popping up every now and then afterwards.

I have a static Hugo blog, so this is "nothing" - but if I had ten times the followers, I'd guess a lot more than 150 instances would be fetching during that window. Let's go with 1,000 instances, so about 30 requests per second instead.

At which point would this become an issue? 30 requests per second is nothing in the world of SaaS, but the Mastodon DDoS effect seems to have a noticeable impact on smaller sites, and on creators who may have a huge following of blog readers - readers who are all humans clicking at human speeds.

10 Mbit/s is still a common connection speed globally, including for very low-cost VPSes. That's around 1 megabyte per second of actual transfer speed, so divided by 30, the Mastodon preview responses can be no larger than about 33 KB before we start running into problems. As soon as a response is larger than 33 KB, buffers in the system will start to back up - and this includes the IP stack in the OS as well! Here's a good article on that subject - if you're using the defaults, that might well be why a site becomes unreachable: https://www.cyberciti.biz/faq/linux-tcp-tuning/
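For illustration, the linked article's advice largely comes down to raising the kernel's socket buffer limits so a burst of preview fetches queues instead of stalling. A hedged sketch of the relevant sysctls - the exact values here are illustrative assumptions, not recommendations; check the article and your distro's documentation first:

```shell
# Illustrative only: raise the maximum socket buffer sizes so bursty
# traffic backs up in larger kernel buffers instead of dropping.
sysctl -w net.core.rmem_max=16777216
sysctl -w net.core.wmem_max=16777216
# min / default / max TCP buffer sizes, in bytes
sysctl -w net.ipv4.tcp_rmem="4096 87380 16777216"
sysctl -w net.ipv4.tcp_wmem="4096 65536 16777216"
```

Persist them in /etc/sysctl.d/ if they help; bigger buffers trade memory for burst tolerance, which is exactly the shape of this problem.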

On a VPS I assume the networking equipment is capable, but if you're self-hosting out of your home you need to check whether your switches, routers and firewall are up to the task as well.

This is *not* a "Mastodon problem" - but Mastodon can help alleviate it. Even at 100 Mbit/s, a single image can be larger than 330 KB, so I think the math holds.
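As a sanity check on the arithmetic above, here's a tiny Python sketch. The 80% efficiency factor is my assumption to model protocol overhead (the post's "around 1 megabyte per second" on a 10 Mbit/s link):

```python
def fetch_rate(instances: int, window_sec: float) -> float:
    """Average requests per second when `instances` servers all fetch
    a preview within a window of `window_sec` seconds."""
    return instances / window_sec

def max_preview_bytes(link_mbps: float, requests_per_sec: float,
                      efficiency: float = 0.8) -> float:
    """Largest preview response the link can sustain at a given request
    rate before buffers start to back up."""
    usable_bytes_per_sec = link_mbps * 1_000_000 / 8 * efficiency
    return usable_bytes_per_sec / requests_per_sec

print(fetch_rate(150, 30))        # 150 instances in ~30 s -> 5.0 req/s
print(max_preview_bytes(10, 30))  # 10 Mbit/s at 30 req/s -> ~33 KB cap
```

At 100 Mbit/s the cap is ~330 KB, which a single image can exceed - matching the claim above.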

@renchap investigated the options a while back. I'm thinking Solution 2. Let me, the person including the link, "own" all the content of my post - including the preview.

https://gist.github.com/renchap/3ae0df45b7b4534f98a8055d91d52186

ping @MarkPrince and @ele, who I know have seen this too.


So just as an experiment, I tried reposting our Threads federated post about our Ultimate Coffee Gear Wish List here via this account.

And yup, it took our website down for a few minutes. It's still down from all the card requests as I type this.

So no more of that, sadly.

#fediverse #mastoddos

So, random idea about Mastodon and WordPress sites, and the unfortunate circumstance of a link on the former often resulting in an accidental DDoS on the latter. Something that, for example, forced @coffeegeek to not post links to their articles and @jwz to just block access to Mastodon crawlers entirely.

Of course it's easy to blame Mastodon (partially deserved; there are steps and suggestions to make the hugs less lethal) and/or WordPress (not the fastest CMS on a good day, quite apart from the community drama), but what about a solution (or at least a slightly better workaround)?

What if you write static HTML with just enough in there to render a preview (the title, some og: metadata, and so on, but not much else) and instruct the webserver / reverse proxy, when the user agent implies Mastodon, to serve that instead of handing the request off to a resource-hungry CMS?

Even if you get a couple of thousand requests dogpiling in, if it's just static content, you should be able to handle that on anything more powerful than a potato, right?

So that's an addition to your CMS (to write the static files on creation/change) and a few lines in your .htaccess or webserver config, and you're done. The static content shouldn't take up much room, and either way storage is cheaper than having your server hugged to death.
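A minimal .htaccess sketch of that idea. Assumptions: Mastodon-family preview fetchers send a User-Agent containing "Mastodon" (or "Pleroma"/"Akkoma" for relatives), and the hypothetical CMS hook writes stubs to a /previews/ directory mirroring your URL paths - adjust both to your setup:

```apache
# Sketch: serve a pre-rendered static preview stub to fediverse
# fetchers instead of invoking the CMS. Paths are assumptions.
RewriteEngine On
RewriteCond %{HTTP_USER_AGENT} Mastodon|Pleroma|Akkoma [NC]
# Only rewrite if a matching stub actually exists on disk
RewriteCond %{DOCUMENT_ROOT}/previews/$1.html -f
RewriteRule ^(.*)$ /previews/$1.html [L]
```

The file-exists check means anything without a stub falls through to the CMS as normal, so a missed hook degrades gracefully rather than breaking previews.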

And yes, this shouldn't be the problem that it currently is; it should be solved on the Mastodon end and not by individual website owners. But here we are.

#mastodon #mastoddos #wordpress

OK, I didn't realise that just posting something while having a link in your profile could provoke the #mastoddos, because various servers that haven't seen you for a while go and re-check the rel="me" links. Luckily I have a tiny, entirely static site, but that's a bit silly.

I have to stop posting active links to our content on Mastodon.

Every time I do so now, it brings down our website for up to 5 minutes.

We've tried pretty much every claimed fix: third-party caching (which in turn breaks other elements of our website's dynamic display), back-end code changes, and more stuff I don't understand at all (but have spent money paying our WP developer to implement). None of it has worked.

The #fediverse powers that be need to fix this growing problem of the #MastoDDos effect on websites. The more followers you have, and the more servers those followers are spread across, the bigger the impact - it literally brings a website to its knees with all the DB calls.

For instance, this morning, I posted the lovely article our creative writer Ethan wrote, which ended up only getting 2 boosts and one "favourite" here, but it brought down our website for 4 minutes and 12 seconds.

That's not sustainable.

@sturmaugen @rf but then again, there's #MastoDDoS

I've never been sure whether the reason I've never experienced the #MastoDdos effect is that nothing on my site has ever been shared widely enough, or that most of it is static files and I have a static cache for the parts that aren't.

https://boostie.social/@courtney/113034655455201552

Courtney 💖 (@[email protected])

Devs: "These URL preview images only get pulled once per social network, and there will only ever be 4 or 5 of those. We don't need to make generating them fast nor efficient"
Other devs: *create Fediverse platforms that pull preview images once per server, at minimum*
Devs: "Why is Mastodon DDOS'ing our poor sweet websites 😧😭"
https://fediverse.zachleat.com/@zachleat/113034398962322306


I stumbled across the amusing phenomenon of #MastoDDoS by accident (there are more details here, with other affected sites).

In short, the problem goes like this:
- a popular blogger posts a link to an article on their own site in the Fedi
- the post spreads to their followers' servers, and all of those servers follow the link and pull down a preview
- the site collapses under the load, since a couple of tens of thousands of identical requests arrive within a short time, and it can stay down for quite a while

I never imagined this was even possible: the requests are small, one-off and identical, and how many active nodes are there in the Fedi anyway - twenty thousand or so? But since everyone runs dynamic engines, scripts and images in abundance, that's enough; even Cloudflare can't cope, which is why people are asking for the problem to be fixed on Mastodon's side (though it's unclear how).

It reminded me of Kessler Syndrome, but for the web.
#ПрекрасноеНастоящее
RE: flipboard.social/users/coffeeg…

Please don't share our links on Mastodon | Hacker News

@hrefna @coffeegeek The devil is in the defaults… I had the same problem with my generic WordPress site until I enabled Automattic's WP Super Cache plug-in. Perhaps the best way WordPress could support the social web would be to improve its caching defaults for bursty traffic in general.

#MastoDDOS