this post was submitted on 06 Jun 2023
187 points (98.4% liked)

Asklemmy


I imagine there's excitement for the increase of activity but worries about the potential toxic side of Reddit coming along too.

I'd especially be interested in the Lemmy devs' opinions.

[โ€“] darkfoe@lemmy.serverfail.party 14 points 1 year ago (1 children)

I fired up my own personal test instance so I can experiment with ways to reduce bottlenecks on the sysadmin/devops side - I used to run the various PHP forums back in the day, so I'm hoping to pass on some knowledge eventually.

I figure the toxic side(s) will gravitate towards instances that tolerate their behaviour, which is easier to deal with. Mods will be busy for a little while though, and I wouldn't be surprised if some of the bigger instances close registrations for a bit so they can catch up - assuming they don't just fall over on the heavy days. But lots of smart folks are trying to prep for this.

[โ€“] Valmond@lemmy.ml 4 points 1 year ago (2 children)

Any idea what hardware specs you need to run an instance? Like for 100 users, 1,000, 10k, etc.?

Or the hardware lemmy.ml runs on and the userbase?

[โ€“] darkfoe@lemmy.serverfail.party 11 points 1 year ago (2 children)

It's still a little unknown at this point what you need to handle X number of users beyond a few hundred. Beehaw.org is pretty open in their financial statements about what they're running, if you're curious, but there's a lot of operational optimization being tried out to see what'll help.

The stack is: postgres, pictrs, lemmy (Rust), lemmy-ui (nodejs), and nginx. RAM usage isn't too bad, but so far I see CPU and disk I/O (pictrs) as the limiting factors. Websockets are being removed, which was another hurdle - they would cause nginx worker threads to max out and knock instances offline.

As a reference point, my single-user instance runs on a $6/month droplet and I'm subbed to a boatload of communities. So far I'm not having problems, but I made a 2 GB swapfile for safety in case RAM somehow spiked. CPU usage tends to spike for me when a community is loaded for the first time, due to image processing, but otherwise things are pretty idle.
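
If you want to keep an eye on the same pressure points (CPU, disk I/O, RAM/swap) on your own instance, here's a minimal sketch in Python, assuming psutil and requests are installed and that the services are reachable on their usual ports (8536 for the lemmy backend, 1234 for lemmy-ui, 8080 for pictrs) - all of that is deployment-dependent, so adjust to match your setup:

```python
#!/usr/bin/env python3
"""Rough resource/health check for a small Lemmy instance.

Assumptions: psutil and requests are installed (pip install psutil requests),
and the services are reachable on their usual ports - change SERVICES to
match your own docker-compose/nginx setup.
"""
import time

import psutil
import requests

SERVICES = {
    "lemmy":    "http://127.0.0.1:8536/api/v3/site",
    "lemmy-ui": "http://127.0.0.1:1234/",
    "pictrs":   "http://127.0.0.1:8080/",  # any HTTP response counts as "up"
}


def sample():
    cpu = psutil.cpu_percent(interval=1)      # % averaged over a 1 s window
    mem = psutil.virtual_memory()
    swap = psutil.swap_memory()
    disk = psutil.disk_io_counters()          # cumulative counters since boot
    print(f"cpu {cpu:5.1f}%  ram {mem.percent:5.1f}%  swap {swap.percent:5.1f}%  "
          f"disk r/w {disk.read_bytes >> 20}/{disk.write_bytes >> 20} MiB since boot")
    for name, url in SERVICES.items():
        try:
            r = requests.get(url, timeout=5)
            print(f"  {name:9s} up   ({r.status_code}, {r.elapsed.total_seconds() * 1000:.0f} ms)")
        except requests.RequestException as exc:
            print(f"  {name:9s} DOWN ({exc.__class__.__name__})")


if __name__ == "__main__":
    while True:
        sample()
        time.sleep(30)
```

Nothing fancy - it just prints a line every 30 seconds, which is enough to spot whether CPU or disk is the thing pegging out when a big community syncs for the first time.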

[โ€“] TheDude@sh.itjust.works 5 points 1 year ago (1 children)

I'm looking forward to the increase in traffic, tbh. I've set up a pretty beefy instance with a ton of monitoring on it, so that hopefully after the wave I can put together a nice write-up on what it would take to scale Lemmy in the future. I'll keep everyone updated with the results!

Yeah, this is a golden moment for those of us who like to learn from sudden heavy load on server software! There aren't many teachable moments like this out there, so I'm trying to soak up everything for work experience.

[โ€“] Valmond@lemmy.ml 2 points 1 year ago (1 children)

I have an i5 6-core Dell sitting around, so why shouldn't I spin up a node?

I'm mostly worried about maintenance and it breaking down one day - how do you deal with that in a good way?

Regular backups should do the job. It's all run in docker containers with mapped volumes, so you can just back up those contents regularly and roll back in the worst case if things completely pooped out. Otherwise maintenance isn't really much worse than a normal webserver - great for learning the Linux CLI if you're not already familiar.
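
For what it's worth, here's a minimal sketch of that kind of backup in Python, assuming the common docker-compose layout where the mapped volumes all live under one directory. The paths, the "postgres" service name, and the "lemmy" DB user below are placeholders, so swap in whatever your compose file actually uses:

```python
#!/usr/bin/env python3
"""Nightly backup sketch for a docker-based Lemmy install.

Assumptions: mapped volumes (pictrs files, config, etc.) live under one
directory, and the database service in docker-compose is called "postgres"
with a "lemmy" user - all placeholders, adjust to match your compose file.
Run as a user that can read the volume directories.
"""
import datetime
import pathlib
import subprocess
import tarfile

COMPOSE_DIR = "/srv/lemmy"                           # dir with docker-compose.yml (placeholder)
VOLUMES_DIR = pathlib.Path("/srv/lemmy/volumes")     # mapped volumes (placeholder)
BACKUP_DIR = pathlib.Path("/srv/backups/lemmy")      # where backups land (placeholder)


def backup():
    BACKUP_DIR.mkdir(parents=True, exist_ok=True)
    stamp = datetime.datetime.now().strftime("%Y%m%d-%H%M%S")

    # 1. Consistent database dump via the running container, so we don't have
    #    to stop postgres or copy live data files.
    dump_path = BACKUP_DIR / f"postgres-{stamp}.sql"
    with open(dump_path, "wb") as f:
        subprocess.run(
            ["docker", "compose", "exec", "-T", "postgres",
             "pg_dumpall", "-U", "lemmy"],
            cwd=COMPOSE_DIR, check=True, stdout=f,
        )

    # 2. Tar up the mapped volumes (pictrs images, lemmy.hjson, etc.).
    tar_path = BACKUP_DIR / f"volumes-{stamp}.tar.gz"
    with tarfile.open(tar_path, "w:gz") as tar:
        tar.add(str(VOLUMES_DIR), arcname="volumes")

    print(f"wrote {dump_path} and {tar_path}")


if __name__ == "__main__":
    backup()
```

Cron that nightly and copy the output somewhere off the box, and the worst case really is just re-importing the SQL dump and un-tarring the volumes over a fresh stack.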

No reason you shouldn't spin up a node though! The more the better - it lets the load spread out.

[โ€“] etienn01@lemmy.efesser.me 4 points 1 year ago

@Valmond@lemmy.ml according to this comment:

The site currently runs on the biggest VPS which is available on OVH

So that's 8 vCores and between 8 and 32 GB of RAM.