this post was submitted on 08 Feb 2024
41 points (97.7% liked)

Selfhosted

I am thinking of extending my storage and I don't know if I should buy a JBOD (my current solution) or a RAID capable enclosure.

My "server" is just a small Intel NUC with an 8th-gen i3. I'm happy with the performance, but a bigger software RAID setup might impact it. My current storage is a 4-bay JBOD with 4TB drives in RAID 5, and I'm thinking of moving to 6 x 8TB drives in RAID 6, which will probably be more work for my little CPU.

[–] atzanteol@sh.itjust.works 11 points 9 months ago* (last edited 9 months ago) (1 children)

The argument for hardware RAID has typically been about performance. But software RAID has been plenty performant for a very long time. Especially for home-use over USB...

Hardware RAID also ties your array to a specific RAID controller, so if that card dies you'll likely need a compatible replacement before you can access your data. A Linux software RAID can be assembled by any Linux system you like, as long as you put the array back together correctly.

There are two general categories of software RAID: the more traditional block-level approach (mdadm), and RAID-like functionality built into newer filesystems.

mdadm creates and manages the RAID in the traditional way and exposes a new, filesystem-agnostic block device, typically something like /dev/md0. You can then put whatever you like on top of it (ext4, btrfs, etc., or layer LVM over it).
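As a rough sketch (device names and the config-file path are assumptions for your distro; these commands destroy existing data on the disks), creating and formatting an mdadm RAID 5 looks like:

```shell
# Build a RAID 5 array from four whole disks
sudo mdadm --create /dev/md0 --level=5 --raid-devices=4 \
    /dev/sda /dev/sdb /dev/sdc /dev/sdd

# The result is an ordinary block device; use any filesystem you like
sudo mkfs.ext4 /dev/md0
sudo mount /dev/md0 /mnt/storage

# Persist the array definition so it assembles at boot (Debian-style path)
sudo mdadm --detail --scan | sudo tee -a /etc/mdadm/mdadm.conf
```

You can watch rebuild/sync progress at any time with `cat /proc/mdstat`.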

Newer filesystems like BTRFS and ZFS implement RAID-like functionality with some advantages and disadvantages. You'll want to do a bit of research here depending on the RAID level you wish to implement. BTRFS, for example, still doesn't have a mature RAID5 implementation as far as I'm aware (as of the last time I checked; double-check, though).

I'd also recommend thinking a bit about how you'll expand your RAID later. Run out of space? Want to add drives? Replace drives? The different implementations handle this differently. mdadm effectively requires all member partitions to be the same size (you can use a disk bigger than the others, but only part of it will be used). ZFS is more flexible about different-sized disks, which can make growing the array easier: you can replace one disk at a time with a larger version pretty easily (it's possible with mdadm, but more involved).
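The one-disk-at-a-time replacement under mdadm might look like this sketch (hypothetical device names; each rebuild must finish before you touch the next disk):

```shell
# Mark one old disk failed, remove it, and add its larger replacement
sudo mdadm /dev/md0 --fail /dev/sdb --remove /dev/sdb
sudo mdadm /dev/md0 --add /dev/sde

# Wait for the rebuild to complete, then repeat for each remaining disk
cat /proc/mdstat

# Once every member is larger, grow the array and then the filesystem
sudo mdadm --grow /dev/md0 --size=max
sudo resize2fs /dev/md0
```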

You may also wish to add more disks in the future and not all configurations support that.

I run a RAID5 on mdadm with LVM and ext4 with no trouble. But I built my RAID when BTRFS and ZFS were a bit more experimental, so I'm less sure about what they do and how stable they are. For what it's worth, my server is a Dell T110 from around 12 years ago. It's a 2-core Pentium G850, which isn't breaking any speed records these days, and I don't notice any significant CPU usage with my setup.

[–] dan@upvote.au 3 points 9 months ago (1 children)

I used to use mdadm, but ZFS mirrors (equivalent to RAID1) are quite nice. ZFS automatically stores checksums. If some data is corrupted on one drive (meaning the checksum doesn't match), it automatically fixes it for you by getting the data off the mirror drive and overwriting the corrupted data. The read will only fail if the data is corrupted on both drives. This helps with bitrot.

ZFS also has raidz1 and raidz2, which use one or two disks' worth of parity and have the same self-healing advantages. I've only got two 20TB drives in my NAS though, so a mirror is fine.
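A minimal sketch of both layouts (pool and device names are made up; pick one layout per pool):

```shell
# Two-disk mirror (RAID1-equivalent), with self-healing reads
sudo zpool create tank mirror /dev/sda /dev/sdb

# ...or single-parity raidz1 across four disks
sudo zpool create tank raidz1 /dev/sda /dev/sdb /dev/sdc /dev/sdd

# Periodically walk every checksum and repair silent corruption
sudo zpool scrub tank
sudo zpool status tank
```

Scheduling a regular scrub (e.g. monthly via cron or a systemd timer) is what actually catches bitrot before you lose the redundant copy.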

[–] atzanteol@sh.itjust.works 1 points 9 months ago

If I were to redo things today I would probably go with ZFS as well. It seems to be pretty robust and stable, and in particular I like the flexibility in drive sizes when doing RAID. I've been bitten with mdadm by two drives of the "same size" that were off by a few blocks...