this post was submitted on 14 Jul 2023
41 points (100.0% liked)


Since Google is getting rid of my unlimited Gdrive and my home internet options are all capped at 20 megabits up, I have resorted to colocating the 125 terabyte Plex server currently sitting in my basement. Right now it is in a Fractal Define 7 XL, but I have ordered a Supermicro 826 2U chassis to swap everything over to.

This being my first time colocating I'm not quite sure what to expect. I don't believe I will have direct access since it is a shared cabinet. Currently it is running Unraid, but I'm considering switching to Proxmox and virtualizing TrueNAS. Their remote hands service is quite expensive, so I'd like to have my server as ready to go as possible. I'm not even sure how my IP will be assigned: is DHCP common in data centers or will I need to define my IP addresses prior to dropping it off?

If anyone has any lessons learned or best practices from colocating I would be really interested in hearing them.

[–] themoonisacheese@sh.itjust.works 20 points 1 year ago (2 children)

I'm sorry for the non-answers in advance, but here goes:

If you won't have easy access, consider server motherboards with KVM over IP capabilities. They really can get you far.

IP assignment is generally managed DHCP, but I have seen DCs that just tell you your IP, wish you good luck, and run it on the honor system. Some of them even let you announce your own IP blocks over BGP.
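If you do end up with a static assignment, it helps to have the config staged before drop-off. A minimal sketch using systemd-networkd; the interface name and all addresses below are placeholders, not anything a real DC has assigned:

```shell
# /etc/systemd/network/10-wan.network -- hypothetical static config;
# replace the interface name and addresses with what your DC gives you.
[Match]
Name=eno1

[Network]
Address=203.0.113.10/29      # your assigned IP + prefix
Gateway=203.0.113.9          # DC's gateway
DNS=9.9.9.9

# then enable it:
#   systemctl enable --now systemd-networkd
```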

Basically best practices boil down to:

Data centers are businesses, and as a customer they should be answering your questions about their operating policies. If they aren't, consider a different DC.

Don't be a dick to them, and don't be a dick to your network neighbors.

You're no longer behind a home router with a firewall that has sensible rules, so it is now up to you to avoid getting pwned and footing the power bill. It is also up to you to avoid spamming out stray traffic.
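Since there's no home router in front of you anymore, a host firewall is worth staging before drop-off. A hedged sketch of a default-deny nftables ruleset (the SSH port is an assumption; adjust to whatever you actually expose):

```shell
# /etc/nftables.conf -- minimal default-deny ruleset (sketch).
# Load with: nft -f /etc/nftables.conf
flush ruleset

table inet filter {
    chain input {
        type filter hook input priority 0; policy drop;
        ct state established,related accept    # replies to our own traffic
        iif lo accept                          # loopback
        icmp type echo-request limit rate 5/second accept
        icmpv6 type { echo-request, nd-neighbor-solicit, nd-neighbor-advert } accept
        tcp dport 22 accept                    # SSH (assumed port)
    }
    chain forward {
        type filter hook forward priority 0; policy drop;
    }
}
```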

[–] Max_P@lemmy.max-p.me 16 points 1 year ago (2 children)

Never colocated, but I did rent bare metal from OVH back when they didn't have any KVM and all you could do was wipe/reinstall, reboot, and boot into a Debian recovery image that was 2-3 releases old.

Definitely seconding the KVM remote access part: you really, really want that, or at least some way to hard reset your server if it crashes. I can't stress this enough. Even if you think you'll never need it, you never know when you'll have a kernel panic or need to do some boot troubleshooting, even just to run fsck. It's absolutely nerve-wracking to reboot a server you can't reach any way other than SSH, staring at that ping window for 2-5 minutes while the thing boots back up and wondering if it will come back online or not.

If you don't have IPMI and can't have some sort of KVM for your server, I highly recommend at least putting a PiKVM or similar in there for remote troubleshooting. If there's no IPMI, I'd also recommend setting up some sort of preboot environment you know will reliably boot (maybe something entirely in the initramfs) that brings up the network and listens for SSH for a couple of minutes before chainloading into the main OS, so that you can at least turn off the firewall or reset the network to a known-good state. Anything that gives you remote access independently of your main OS.
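On Debian/Ubuntu, something close to that preboot-SSH idea can be assembled with the dropbear-initramfs package. A sketch, assuming the network comes up via DHCP (paths vary slightly between releases):

```shell
# Sketch: SSH access from the initramfs on Debian/Ubuntu.
apt install dropbear-initramfs

# Authorize your key for the initramfs environment
# (path is /etc/dropbear-initramfs/authorized_keys on older releases):
cat ~/.ssh/id_ed25519.pub >> /etc/dropbear/initramfs/authorized_keys

# Tell the initramfs to bring up the network (DHCP assumed here):
# add ip=dhcp to GRUB_CMDLINE_LINUX in /etc/default/grub, then:
update-grub
update-initramfs -u
```

This only gets you a shell before the main OS takes over, so it's a complement to, not a substitute for, a real KVM or IPMI.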


At least I had access to the recovery environment from OVH, but even then, that thing took a full boot cycle to come up, plus some more time for them to deliver the credentials by email (which had better not be hosted on that box itself). Change a config file, reboot again: legitimately 10-15 minutes between each attempt, with little to no way of knowing what happened until you booted the recovery again. It was horrifying, can't recommend.

IPMI saved my ass a few times and I'm never getting another box without it.

[–] themoonisacheese@sh.itjust.works 9 points 1 year ago (1 children)

Tbh I worked on a campus where we had totally free access to our bays in the local DC (like 5 minutes away by car); even in the dead of night we just had to make a call to not get stopped at the door. Even then, IPMI is still just so much more convenient than sitting on the floor with your laptop, a VGA screen, and a PS/2 keyboard among your tools in a loud DC with mandatory earplugs, keeping an eye on the nitrogen fire suppression that really has no reason to trigger, but it could, and that is terrifying.

Or you could have IPMI and be sat at your desk with coffee, listening to music. Your choice really; I wonder why iLO licenses are so expensive :P

[–] Notorious@lemmy.link 4 points 1 year ago (1 children)

I have a spare Pi 4 sitting around the house that I could pretty cheaply turn into a PiKVM. Looks like there are some slick HATs that install into a PCIe slot so I don't have a Pi and a bunch of wires hanging out of the chassis. Looks like I'll be going that route; just need to figure out how to power it (they all seem to require external 5V or PoE).

Consumer motherboards have some USB ports with standby power at 2 A. The power supply has a 5VSB rail as well; that's where that standby power comes from.

[–] Notorious@lemmy.link 4 points 1 year ago (2 children)

Does the IPMI or KVM go on a private network of some sort? Surely you wouldn't want to expose that to the internet.

Usually you define a VLAN dedicated to your IPMI devices, reachable only through an access-controlled path (usually a VPN served by the firewall, though don't do that if you're virtualizing the firewall, for obvious reasons). The DC might offer a VPN of their own specifically for this purpose, or you can pay them for more space and install a physical firewall, but that's a more significant investment.
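As an illustration of the VPN-into-the-IPMI-VLAN idea, here's a hypothetical client-side WireGuard config; every name, key, and subnet below is a placeholder, not something the DC or firewall actually dictates:

```shell
# Sketch: /etc/wireguard/wg0.conf on your laptop, reaching an IPMI VLAN
# behind a colo firewall. All values are placeholders.
[Interface]
PrivateKey = <your-client-private-key>
Address = 10.99.0.2/24

[Peer]
PublicKey = <firewall-public-key>
Endpoint = vpn.example-dc.net:51820
AllowedIPs = 10.99.0.0/24, 192.168.50.0/24   # VPN subnet + IPMI VLAN

# bring it up with: wg-quick up wg0
```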

Ultimately, best practice says not to expose the IPMI to the internet, but if you really have no choice and your firmware is up to date, then you mainly need to fear 0-days and brute force attacks; the login pages are usually reasonably secure, given that access is equivalent to physical access. You will attract a lot of parasitic traffic probing for flaws, though.

[–] Max_P@lemmy.max-p.me 3 points 1 year ago

Usually, yes. That's something you might want to discuss with the datacenter: ask what they have to offer there; some will give you a VPN to reach it. But I don't have experience with that; my current servers came with IPMI, and I can download a Java thing from OVH to connect to it.

[–] Notorious@lemmy.link 4 points 1 year ago

This is good info! I'll follow up with the provider. Unfortunately, even though I live in a large city, of the two dozen or so places I contacted, only two would consider less than a half rack.

consider server motherboards with KVM over IP capabilities

I had not considered this. My plan was initially just to swap the consumer grade stuff I have over to the 826 since it supports ATX, but now I'll reconsider. Remote KVM has come in handy a few times with my dedicated servers over the years, so lacking that would suck pretty bad. I don't know that I won't have access, but several of the other providers stated on their websites that shared cabinets don't come with physical access (which I honestly would prefer, since I'll have several thousand dollars in hardware sitting in there).

Data centers are businesses and as a costumer they should be answering your questions about their operating policies. If they aren’t consider a different DC.

Great point and I totally agree! Just didn't want to walk in like a complete noob asking a bunch of dumb questions if I could prevent it.

You’re no longer behind a home router with a firewall that has sensible rules, so it is now up to you to avoid getting pwned and footing the power bill. It is also up to you to avoid spamming out stray traffic.

Thankfully I've got quite a bit of experience hardening servers exposed directly to the internet. *knocks on wood* So far I've managed not to get pwned by turning on automatic security updates, keeping open ports limited to SSH with password and root login disabled, and reverse proxying everything. If I need access to something that doesn't need to be exposed, I just port forward through SSH.
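That setup could look roughly like this; a sketch only, and the forwarded service and port are made-up examples, not anything from the actual server:

```shell
# /etc/ssh/sshd_config.d/hardening.conf -- sketch of the settings described.
PasswordAuthentication no
PermitRootLogin no
KbdInteractiveAuthentication no

# Reload to apply:
#   systemctl reload sshd

# Reaching an unexposed service (say, a web UI on port 8080 -- hypothetical)
# through an SSH local forward instead of opening the port:
#   ssh -L 8080:localhost:8080 user@colo-server
#   then browse http://localhost:8080
```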