I'm sure everyone has noticed the huge delay in the streaming API. I'm basically working non-stop on trying to mitigate it. 4000 toots per hour is just more activity than the current processing queue can keep up with

I'm sorry if I wasn't able to answer everyone's messages! My notifications move too fast

@Gargron Good luck, you're doing impressive work. Thanks for everything.
@Gargron Is there a similar limit on the global number of instances? In other words, if we get thousands of instances, each with few users on it, wouldn't it lead to congestion at the whole-network level?
@Gargron Thanks for your work on Mastodon.
It's a great internet project!
The name format for tagging someone on a remote instance is tooooo ugly lol. @Gargron
@Gargron good luck with the job!
@Gargron I heard that there's a dev chat to join. Is that on Freenode, or somewhere else?
@Gargron we know how much load your project is under, you're the best <3

@Gargron I have faith in you, you can do the thing! :)

You need more nodes to run your stuff on. Perhaps someone like me or @hergertme or @elopio could help with that.

@Gargron Keep up the great work. The effort is really showing in the quality of things around here.
@Gargron What you do is good and you should feel good. Thank you for doing this. If there's any way we can help, let us know. #DevOps
@Gargron you know tooting is another word for farting right?
@Gargron You're doing great. Keep up the good work 🙏
@Gargron do you want experienced devops help? I'm very comfy with AWS infrastructure.
@Gargron Thanks for all the hard work. Make sure to take breaks and relax. :)
@gargron good luck! Scaling is hard — but a great problem to have 😆

@Gargron Just curious, what hardware is this instance running on? 4000 toots/h does not seem like *a lot* to me…

OTOH, Ruby is slow :( I think going with a compiled language could easily multiply that by 10.