Keep in mind that the upcoming Lemmy update (replacing websockets) will probably fix this.
Yes I really hope so!!
Maybe this is a dumb question, but why would replacing websockets speed things up? I read the Wikipedia page on it, but I guess I don’t understand it fully.
In general, websockets scale badly because the server has to keep a connection open, and a fair amount of state, for every client. You also can't really cache websocket messages the way you can normal HTTP responses. Not sure which of those reasons apply to Lemmy, though.
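Roughly, the difference is that a plain HTTP endpoint can tell caches and proxies how long a response stays valid, while a websocket is one long-lived connection per client that intermediaries can't cache at all. Here's a minimal actix-web sketch of the HTTP side, purely illustrative (the route and max-age are made up, and this isn't Lemmy's actual code):

```rust
use actix_web::{web, App, HttpResponse, HttpServer, Responder};

async fn list_posts() -> impl Responder {
    // A normal HTTP response can carry caching headers, so a CDN or reverse
    // proxy can answer repeat requests without hitting the app server at all.
    HttpResponse::Ok()
        .insert_header(("Cache-Control", "public, max-age=5"))
        .body(r#"{"posts": []}"#)
}

#[actix_web::main]
async fn main() -> std::io::Result<()> {
    // A websocket, by contrast, is one open socket (plus per-client state)
    // held by the server for the whole session, and pushed messages can't be
    // cached by anything in between.
    HttpServer::new(|| App::new().route("/api/v3/post/list", web::get().to(list_posts)))
        .bind(("127.0.0.1", 8080))?
        .run()
        .await
}
```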
Thanks for the explanation!
The caching problem is definitely part of it from conversations on GitHub.
I really hope someone is doing some level of performance testing on those changes to make sure they actually fix the performance issues.
Just hopping into the chain to say that I appreciate you and all of your hard work! This place—Lemmy in general, but specifically this instance—has been so welcoming and uplifting. Thank you!
At least the "reply" button goes away so I don't end up double-, triple-, or even duodecuple-posting! Thanks for all the hard work that must be going on behind the scenes right now!
I kept getting a timeout message from Jerboa, which led me to think my post hadn't gone through. So I ended up submitting the same joke to the Dad Jokes sub three times. Which actually is how dad might tell that joke.
Lemmy is now your digital dadlife assistant.
I think in that case it's a feature not a bug.
I get this occasionally with Jerboa too. I had assumed it was because I'm on Mint and the connection is shoddy, but maybe it's an issue with the client.
Maybe related, but I've noticed that upvoting/downvoting has a similar lag.
Same. It would be good to fix this
I can't up/down vote at all.
Been noticing this in the app I'm working on. Pretty much all POST requests fail to return a response and just time out after 60 seconds. A quick refresh shows that the new items do successfully get created, though.
I assume there is something that is O(N), which would explain why the wait time scales with community size (number of posts and comments).
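As a toy illustration of that kind of O(N) write path (completely made-up data model, not Lemmy's actual schema or ranking code):

```rust
// Toy model: a post with a score and a computed rank.
struct Post { score: i64, rank: f64 }

// O(N): re-rank every post in the community whenever anything changes.
fn rerank_all(posts: &mut [Post]) {
    for p in posts.iter_mut() {
        p.rank = p.score as f64; // stand-in for a real ranking formula
    }
}

// O(1): only touch the post that actually changed.
fn rerank_one(post: &mut Post) {
    post.rank = post.score as f64;
}

fn main() {
    let mut posts: Vec<Post> = (0..100_000).map(|_| Post { score: 1, rank: 0.0 }).collect();
    // With 100k posts the first approach does 100k updates per write, which is
    // exactly the "wait time scales with community size" pattern.
    rerank_all(&mut posts);
    rerank_one(&mut posts[0]);
    println!("re-ranked {} posts", posts.iter().filter(|p| p.rank > 0.0).count());
}
```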
Oh, Big-O notation? I never thought I'd see someone else mention it out in the wild!
:high-five:
You are going to meet a lot of OG redditors in the next few weeks. Old Reddit had Big O in every post, even posts with cute animals.
Again, thank you for the outstanding work! You are awesome!
Also, the new icon for lemmy world is great!
Have you tried enabling the slow query logs @ruud@lemmy.world? I went through that exercise yesterday to try to find the root cause but my instance doesn’t have enough load to reproduce the conditions, and my day job prevents me from devoting much time to writing a load test to simulate the load.
I did see several queries taking longer than 500ms (up to 2000ms) but they did not appear related to saving posts or comments.
These things are taking 15-20 seconds though.
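For anyone wanting to try the same thing: slow-query logging is the Postgres setting log_min_duration_statement, which can go in postgresql.conf or be flipped at runtime. A minimal sketch with the Rust postgres crate, assuming superuser access to the instance's database; the connection string and the 500 ms threshold are just examples:

```rust
use postgres::{Client, NoTls};

fn main() -> Result<(), postgres::Error> {
    // Placeholder credentials; point this at the instance's actual database.
    let mut client = Client::connect("host=localhost user=postgres dbname=lemmy", NoTls)?;
    // Log every statement that runs longer than 500 ms. ALTER SYSTEM persists
    // the setting to postgresql.auto.conf, and the reload applies it without a restart.
    client.batch_execute("ALTER SYSTEM SET log_min_duration_statement = '500ms'")?;
    client.batch_execute("SELECT pg_reload_conf()")?;
    Ok(())
}
```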
Thank you for your hard work and keeping us up to date.
Does this behaviour appear on other big instances? E.g. lemmy.ml?
Yes. Absolutely does happen on other instances that have thousands of users.
Great, so it's reproducible and Lemmy-the-app related, not instance-specific. Should be fixable across the board once it's identified and resolved.
Yes it does, tried this workaround before.
Thanks for your and the other Lemmy devs' work on this. These growing pains are a good thing, as frustrating as they can be for users and maintainers alike. Lemmy will get bigger, and this optimization treadmill is really just starting.
In my case, the page keeps spinning but the post is not submitted, regardless of reloading the page or waiting for a long time. There was one case where I cut down significantly on the amount of characters in the post and then it posted, but I have been unable to replicate this.
I have the same issue with image posts. If I submit them through the app the posts counter on my profile goes up, but there's no post. I also can't retrieve any posts for my own account. It says I have 3 but it shows none.
Comments work OK so I'm not sure what the problem is. I was worried I got restricted or something.
Oh my god I'm so fucking stupid. If you hide posts you've seen it'll also hide your own posts...
@ruud@lemmy.world Yo dude, first off huge props and a big thank you for what you have setup. I’ll be donating monthly while I am here. I appreciate that we have an alternative to Reddit at this critical moment in time.
I do have a question about your long-term plans: do you want to continue to expand and upgrade the server as funding allows, or is there a cap at which you will close off the server to new members? Or perhaps make it more of a process to join?
Well, if all the Reddit users were to come over to Lemmy I guess all servers would need to scale up... but I think the server we have now is powerful enough to grow quite a lot, as long as the software gets tuned...
ok, so it's not just me. Hope it gets resolved soon!
Thanks for posting the workaround and for working to resolve the issue. Lemmy is a great place, and a real breath of fresh air after Reddit.
It's def more hung up today; oddly, it's only first-level replies for some reason.
Thank you so much
I noticed that too, page keeps spinning but comments are posted immediately anyway.
I’ve done this twice in the last 20 minutes and the content is not there. This workaround was working earlier today though.
I noticed this, thanks for the clarification
This is the biggest issue I have run into. Thanks for looking into it.
One of the large applications I was working on had the same issue. To solve it, we ended up creating multiple smaller instances and hosting a set of related APIs on each server.
For example, read operations like listing posts and comments could live on one server, while write operations are clustered on another.
Later, whichever server is getting overloaded can be split up again. In our case, 20% of the APIs used around three quarters of the server resources, so we split those 20% across 4 large servers and kept the remaining 80% of the APIs on 3 small servers.
This worked for us because the databases were maintained on separate servers.
I wonder if a quasi micro-services approach would solve the issue here.
Edit 1: If done properly this approach can be cost-effective; in some cases it might cost 10 to 20 percent more in server costs, but it leads to a visible improvement in performance.
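As a rough sketch of that read/write split (paths, pool names, and server counts here are illustrative, not a real Lemmy deployment):

```rust
// Hypothetical routing rule a load balancer or thin proxy could apply:
// read-heavy endpoints go to one pool of app servers, writes to another.
#[derive(Debug)]
enum Upstream { ReadPool, WritePool }

fn pick_upstream(method: &str, path: &str) -> Upstream {
    match method {
        // Listing posts/comments is the hot read path; give it its own pool.
        "GET" if path.starts_with("/api/v3/post/list")
              || path.starts_with("/api/v3/comment/list") => Upstream::ReadPool,
        // Anything that mutates state (posting, voting, etc.) goes elsewhere.
        "POST" | "PUT" | "DELETE" => Upstream::WritePool,
        _ => Upstream::ReadPool,
    }
}

fn main() {
    // e.g. the ReadPool could be 3 small servers and the WritePool 4 large ones,
    // resized independently as one or the other gets overloaded.
    println!("{:?}", pick_upstream("GET", "/api/v3/post/list"));
    println!("{:?}", pick_upstream("POST", "/api/v3/comment"));
}
```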
Is the slowdown because the instance has to send out updates about the comment to every other instance before returning a successful response? If so, is anyone working on moving this to an async queue?
Sending out updates seems like something that’s fine being eventually consistent
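Conceptually that would look something like this: save the comment locally, enqueue the outbound deliveries, and return right away while a background worker drains the queue. A std-only sketch, not Lemmy's actual federation code; the Activity struct and instance names are made up:

```rust
use std::sync::mpsc;
use std::thread;
use std::time::Duration;

struct Activity { comment_id: u64, target_instance: String }

fn main() {
    let (tx, rx) = mpsc::channel::<Activity>();

    // Background worker: drains the queue and delivers to remote instances,
    // so federation becomes eventually consistent instead of blocking requests.
    let worker = thread::spawn(move || {
        for act in rx {
            // stand-in for an HTTP POST to the remote instance's inbox
            thread::sleep(Duration::from_millis(50));
            println!("delivered comment {} to {}", act.comment_id, act.target_instance);
        }
    });

    // "Request handler": save the comment locally, enqueue deliveries, return.
    let comment_id = 42;
    for instance in ["lemmy.ml", "beehaw.org"] {
        tx.send(Activity { comment_id, target_instance: instance.to_string() }).unwrap();
    }
    println!("comment {comment_id} saved, response returned immediately");

    drop(tx); // close the queue so the worker can finish in this demo
    worker.join().unwrap();
}
```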
Ooh, that's a good remark! I'll see if that's the cause.
Reading more about how this works, sending out updates to each instance shouldn't block the request from returning unless you have a debug config flag set.
It might be due to poorly optimized database queries. Check out this issue for more info. It sounds like there are problems with updating the rank of posts, and probably comments too.
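For context, the ranking itself is a small formula; this is roughly what Lemmy's hot_rank SQL function computes, though the exact constants may differ between versions:

```rust
// Approximation of Lemmy's hot_rank: recent, well-voted content ranks high,
// and the rank decays as the post ages.
fn hot_rank(score: i64, hours_since_published: f64) -> f64 {
    let numerator = 10_000.0 * (1.0_f64.max(3.0 + score as f64)).log10();
    numerator / (hours_since_published + 2.0).powf(1.8)
}

fn main() {
    // The expensive part isn't the math, it's re-evaluating it across every
    // post/comment row on a schedule or trigger as the tables grow.
    println!("{:.1}", hot_rank(25, 1.0));  // fresh, well-voted post ranks high
    println!("{:.1}", hot_rank(25, 24.0)); // same score a day later ranks much lower
}
```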
So it looks like YOU SOLVED THE ISSUE with this reply! This led me to check the debug mode, and it was on! I turned that on when I had just started the server and federation had issues...
We no longer seem to have the slowness!!
That’s awesome! Thanks for hosting the server!