🐌 Oof, #lemmy is slowwww atm
@icarurs Which lemmy?
@ruud @icarurs yeah I think lemmy.world has been pretty snappy compared to a few days ago. Still occasionally get some failures browsing through the #liftoff app
@Lifecoach5000 @icarurs What kind of failures? How often? We did get a lot fixed today https://lemmy.world/post/1061471
Lemmy.world status update 2023-07-05

Another day, another update. More troubleshooting was done today. What did we do:

- Yesterday evening @phiresky@lemmy.world [https://lemmy.world/u/phiresky] did some SQL troubleshooting with some of the lemmy.world admins. After that, phiresky submitted some PRs to GitHub.
- @cetra3@lemmy.ml [https://lemmy.ml/u/cetra3] created a Docker image containing 3 PRs: Disable retry queue [https://github.com/LemmyNet/lemmy/pull/3468], Get follower Inbox Fix [https://github.com/LemmyNet/lemmy/pull/3482], and Admin Index Fix [https://github.com/LemmyNet/lemmy/pull/3483].
- We started using this image and saw a big drop in CPU usage and disk load.
- We saw thousands of errors per minute in the nginx log from old clients trying to access the websockets (which were removed in 0.18), so we added a `return 404` to the nginx conf for /api/v3/ws (see the first config sketch below).
- We updated lemmy-ui from RC7 to RC10, which fixed a lot, among which the issue with replying to DMs.
- We found that the many 502 errors were caused by an issue in Lemmy/markdown-it.actix or whatever, causing nginx to temporarily mark an upstream as dead. As a workaround we can either 1) use only 1 container, or 2) set `proxy_next_upstream timeout;` and `max_fails=5` in nginx (see the second sketch below).

Currently we're running with 1 Lemmy container, so the 502 errors are completely gone so far, and because of the fixes in the Lemmy code everything seems to be running smoothly. If needed we could spin up a second Lemmy container using the `proxy_next_upstream timeout; max_fails=5` workaround, but for now it seems to hold with 1.

Thanks to @phiresky@lemmy.world [https://lemmy.world/u/phiresky], @cetra3@lemmy.ml [https://lemmy.ml/u/cetra3], @stanford@discuss.as200950.com [https://discuss.as200950.com/u/stanford], @db0@lemmy.dbzer0.com [https://lemmy.dbzer0.com/u/db0], @jelloeater85@lemmy.world [https://lemmy.world/u/jelloeater85], and @TragicNotCute@lemmy.world [https://lemmy.world/u/TragicNotCute] for their help! And not to forget, thanks to @nutomic@lemmy.ml [https://lemmy.ml/u/nutomic] and @dessalines@lemmy.ml [https://lemmy.ml/u/dessalines] for their continuing hard work on Lemmy! And thank you all for your patience, we'll keep working on it!

Oh, and as a bonus, an image (thanks phiresky!) of the change in bandwidth after implementing the new Lemmy Docker image with the PRs: [https://lemmy.world/pictrs/image/166fc6d9-972d-4ff2-aa3a-b2ecbbb90cd5.png]

Edit: So as soon as the US folks woke up (hi!) we did need the second Lemmy container for performance after all. That's now started, and I noticed the `proxy_next_upstream timeout` setting didn't work (or I didn't set it properly), so I used `max_fails=5` on each upstream instead, which does actually work.
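For illustration, a minimal sketch of the websocket workaround described in the post, assuming a fairly standard nginx server block; the /api/v3/ws path comes from the post, while the server_name, port, and everything else here are placeholders:

```nginx
server {
    listen 443 ssl;
    server_name lemmy.world;  # placeholder

    # Old 0.17-era clients still poll the websocket endpoint that was
    # removed in Lemmy 0.18. Answer them with a cheap 404 in nginx
    # instead of letting thousands of doomed requests per minute reach
    # the backend.
    location /api/v3/ws {
        return 404;
    }

    # ... remaining locations proxy to the Lemmy containers as usual ...
}
```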
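And a sketch of the two-container 502 workaround, again hedged: only the `proxy_next_upstream timeout` and `max_fails=5` parts come from the post, while the upstream name, ports, and `fail_timeout` value are assumptions. By default nginx marks a backend as failed after a single error (`max_fails=1`) and skips it for `fail_timeout` seconds, which is how occasional backend hiccups turned into bursts of 502s:

```nginx
# Hypothetical upstream block for two Lemmy containers.
upstream lemmy_backend {
    server 127.0.0.1:8536 max_fails=5 fail_timeout=10s;
    server 127.0.0.1:8537 max_fails=5 fail_timeout=10s;
}

server {
    location / {
        # Only fail over to the next upstream on timeouts, not on every
        # error, so one bad response doesn't take a container out of
        # rotation.
        proxy_next_upstream timeout;
        proxy_pass http://lemmy_backend;
    }
}
```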

@ruud @icarurs well I would occasionally get timeout errors when loading comments after clicking on threads in #liftoff. As it stands right now I can't reproduce it after clicking through 20-something threads. Thank you all for what you're doing btw
@ruud Was seeing some slowness on lemmy.world today but it looks to be a lot better now! Thank you for the work you've been doing
@icarurs Lemmy.world has been unusable for days. Even when it's working, it has all kinds of issues when posting, upvoting, or even just moving from page to page of results. I highly recommend you try a smaller instance. I moved over to lemm.ee and it works great.
@MyOpinion I started off on Lemmy.world and then moved over to a smaller instance (reddthat.com), but even the smaller instance was having some issues the past couple of days. Seems a bit better now, though!