this post was submitted on 05 Jul 2023
3062 points (99.2% liked)

Lemmy.World Announcements

Another day, another update.

More troubleshooting was done today. What did we do:

  • Yesterday evening @phiresky@lemmy.world did some SQL troubleshooting with some of the lemmy.world admins. After that, phiresky submitted some PRs to GitHub.
  • @cetra3@lemmy.ml created a Docker image containing 3 PRs: Disable retry queue, Get follower Inbox Fix, Admin Index Fix
  • We started using this image, and saw a big drop in CPU usage and disk load.
  • We saw thousands of errors per minute in the nginx log from old clients trying to access websockets (which were removed in 0.18), so we added a `return 404` in the nginx config for /api/v3/ws (see the sketch after this list).
  • We updated lemmy-ui from RC7 to RC10, which fixed a lot, including the issue with replying to DMs.
  • We found that the many 502 errors were caused by an issue in Lemmy/markdown-it.actix or whatever, causing nginx to temporarily mark an upstream as dead. As a workaround we can either 1) use only 1 container, or 2) set ~~proxy_next_upstream timeout;~~ max_fails=5 in nginx (sketch below).
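
For the curious, the websocket fix is only a couple of lines of nginx config. A minimal sketch, assuming a standard reverse-proxy setup (only the /api/v3/ws path and the 404 come from the list above; the surrounding server block is hypothetical):

```nginx
server {
    listen 80;                # example listener; the real config terminates TLS
    server_name lemmy.world;  # example host

    # 0.18 removed websockets, but pre-0.18 clients still poll this
    # endpoint. Answering 404 directly means nginx never proxies these
    # requests to the Lemmy backend.
    location /api/v3/ws {
        return 404;
    }

    # ...the usual proxy_pass locations for the API and UI go here...
}
```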

Currently we're running with 1 Lemmy container, so the 502 errors are completely gone so far, and because of the fixes in the Lemmy code everything seems to be running smoothly. If needed, we could spin up a second Lemmy container using the ~~proxy_next_upstream timeout;~~ max_fails=5 workaround, but for now it seems to hold with 1.

Thanks to @phiresky@lemmy.world, @cetra3@lemmy.ml, @stanford@discuss.as200950.com, @db0@lemmy.dbzer0.com, @jelloeater85@lemmy.world, and @TragicNotCute@lemmy.world for their help!

And not to forget, thanks to @nutomic@lemmy.ml and @dessalines@lemmy.ml for their continuing hard work on Lemmy!

And thank you all for your patience, we'll keep working on it!

Oh, and as a bonus, an image (thanks Phiresky!) of the change in bandwidth after implementing the new Lemmy Docker image with the PRs.

Edit: As soon as the US folks woke up (hi!), we turned out to need the second Lemmy container for performance after all. That's now started. I also noticed the proxy_next_upstream timeout setting didn't work (or I didn't set it properly), so I used max_fails=5 on each upstream instead, and that does actually work.
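
For reference, roughly what that max_fails workaround looks like in an nginx upstream block. This is a sketch, not our exact config: the backend names and fail_timeout value are made up, and only the max_fails=5 parameter comes from this post.

```nginx
# Two Lemmy containers behind one upstream (names/port are examples;
# 8536 is Lemmy's default API port). With max_fails=5, nginx only marks
# a backend unavailable after 5 failed attempts within fail_timeout,
# instead of dropping it from rotation on the first 502.
upstream lemmy {
    server lemmy-1:8536 max_fails=5 fail_timeout=10s;
    server lemmy-2:8536 max_fails=5 fail_timeout=10s;
}
```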

50 comments
[–] ikidd@lemmy.world 4 points 1 year ago (last edited 1 year ago)

My god, it's fast right now. Don't touch anything.

Good job.

[–] Pregnenolone@lemmy.world 4 points 1 year ago

It’s very much helping the third-party apps as well. Memmy is running way smoother now.

[–] Xepher@lemm.ee 4 points 1 year ago

Good job on troubleshooting!

Have you looked into possibly migrating to kubernetes or some other form of docker container management/orchestration system to help with automatic scaling and load balancing?

[–] ruffsl@programming.dev 3 points 1 year ago

Don't quote me, but I recall reading on GitHub that there are a few things to be refactored before Lemmy can support horizontal scaling approaches.

[–] Sylvannes@lemmy.world 3 points 1 year ago

Things are looking a lot more stable, thank you (all) for the amazing work!

[–] comcreator@lemmy.world 3 points 1 year ago

The instance seems to be much better. Posting and commenting is not taking as long and loading times are way better. I hope things can stay this good or even get better.

[–] Vlixz@lemmy.world 3 points 1 year ago

Amazing improvement! Lemmy felt so much more responsive suddenly today, now it makes sense why :3

That's a very phallic bandwidth image, Dr. Freud

[–] TrueStoryBob@lemmy.world 3 points 1 year ago

Running so much smoother! You guys rock!

[–] CthuluVoIP@lemmy.world 3 points 1 year ago

Things feel a ton better today. Thank you for all the hard work!

Everything is feeling great so far. The only bug I'm encountering is that when opening a thread (in Firefox on desktop) it auto-scrolls down past the content to the replies.

[–] snowe@programming.dev 3 points 1 year ago

@ruud@lemmy.world is this docker container y'all are using available on a registry? We'd like to use it. And do you have a load balancer in front of your lemmy-ui image to allow two containers to run? Or is that built in and I just never noticed it?

[–] zacher_glachl@lemmy.world 3 points 1 year ago

Thank you for all your great work, and I really appreciate and enjoy the updates about the gritty details!

Thank you to everyone that helped.

[–] Tygr@lemmy.world 3 points 1 year ago

You all are champions today. When I loaded this up today, I was very happy and impressed. I knew tweaks were made.

[–] bluemoose@lemmy.world 3 points 1 year ago

Thanks for all the work you've put into this!

[–] trambe@lemmy.world 3 points 1 year ago

Can confirm, having way fewer problems browsing Lemmy right now. Thank you admins!

[–] fubo@lemmy.world 3 points 1 year ago

Woo hoo, nice graphs! Monitoring data that shows big improvements is always fun to see.

[–] HybridSarcasm@lemmy.world 3 points 1 year ago

Wow! Your commitment and diligence are admirable!

[–] DuskLoaf@lemmy.world 3 points 1 year ago

Thanks for the update! Things seem way speedier now ^^

[–] RipleyRiley@lemmy.world 3 points 1 year ago

Upvotes appear to be working, but I still can't post images. The post displays the infinite spinning loading circle.

[–] CreativeCider@lemmy.world 3 points 1 year ago

Well done guys!

[–] DV8@lemmy.world 3 points 1 year ago

Please recommend in a topic title that people update their app. Connect couldn't even load a topic without failing today. An update fixed it, but I had to force it manually because it didn't apply automatically.

This will drive people away. Literally none of the communities I subscribe to on lemmy.world even seemed to have a new comment.

[–] Landi@lemmy.world 3 points 1 year ago

Thanks for your efforts and improvements. Keep it going β™₯οΈπŸ™

[–] Denuath@lemmy.world 3 points 1 year ago

This update is a difference like night and day. Very impressive!

[–] Aurix@lemmy.world 3 points 1 year ago

The new updates made a ton of improvements. Thank you very much. It is near ideal now.

[–] p0ppe@lemmy.world 3 points 1 year ago

Great work! Awesome to see how fast the technical side of the Lemmyverse is evolving and improving!

[–] trouser_mouse@lemmy.world 3 points 1 year ago

Awesome update, great to see and feel the progress from all the hard work!

[–] Sterben@lemmy.world 3 points 1 year ago

Good job! πŸ€™

[–] Sinister_Grape@lemmy.world 3 points 1 year ago

Seemed to be running a lot smoother, thank you Ruud!

[–] kayaven@lemmy.world 3 points 1 year ago

The changes are very much noticeable, thanks as always everyone!

[–] ulu_mulu@lemmy.world 3 points 1 year ago

Fantastic job in troubleshooting and submitting PRs! Improvements are very noticeable already :)

[–] ralino@lemmy.world 3 points 1 year ago

Everything is much faster and smoother today. Thanks devs!

[–] unreachable@lemmy.my.id 3 points 1 year ago

Am I amazed by all you guys' work?!

I absolutely am.

That's some pretty heroic shit right there.

You just took Lemmy from something I'm willing to live with in the short term in the hope it gets better, to something I am fully satisfied with.

Now let's grow so we can fuck it up all over again!!!
