this post was submitted on 08 Feb 2024
41 points (97.7% liked)

Selfhosted

I am thinking of extending my storage, and I don't know if I should buy a JBOD (my current solution) or a RAID-capable enclosure.

My "server" is just a small Intel NUC with an 8th-gen i3. I am happy with the performance, but that might be impacted by a bigger software RAID setup. My current storage is a 4-bay JBOD with 4TB drives in RAID 5, and I am thinking of going to 6 x 8TB drives in RAID 6, which will probably be more work for my little CPU.

top 34 comments
[–] poVoq@slrpnk.net 14 points 9 months ago (2 children)

Normally I would say software, or rather a RAID-like filesystem such as btrfs or ZFS. But in your specific case of funneling it all through a single USB-C connection, it is probably better to keep using an external box that handles it all internally.

That said, the CPU load of software RAID is very small, so that isn't really something to be concerned about. But USB connections are quite unstable and not well suited for directly connecting drives in a RAID.

[–] BentiGorlich@gehirneimer.de 2 points 9 months ago (1 children)

I mean, I've been running the setup this way for over 4 years and have never had any problems with the USB connection, so I can't confirm that "USB connections are quite unstable"...

[–] poVoq@slrpnk.net 7 points 9 months ago (2 children)

I suppose that is because the JBOD box handles the RAID internally, so brief connection issues aren't that problematic and can be recovered from automatically. But that wouldn't be the case if you connected everything together with a USB hub and USB-to-SATA adapters and ran a software RAID on top of that.

[–] Auli@lemmy.ca 2 points 9 months ago* (last edited 9 months ago)

I don't know about that; USB-C with Thunderbolt has direct access to PCIe lanes.

[–] avidamoeba@lemmy.ca 1 points 9 months ago* (last edited 9 months ago)

I've been running a 4-disk RAIDz1 on USB for 4 years now, with zero failures on one machine and one failure on another, where it turned out the USB controller in one WD Elements was overheating. Sticking a small heatsink on it resolved the problem and it's been stable under load for 2 years now. The USB devices have to be decent: AMD's host controllers are okay, VIA hubs are okay, ASMedia USB-to-SATA bridges are okay. I'm using some enclosures with ASMedia chips and some off-the-shelf WD Elements that also use ASMedia. It's easier to get a reliable system when installing disks internally, since the PSU and interconnects are much better regulated and pretty much any of them work well, whereas with USB you have to be careful to select decent components.
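
If you want to check which bridge chip a given enclosure uses before trusting it with an array, something along these lines works (the device path is an example, and the vendor IDs are quoted from memory):

```
# List USB devices; ASMedia bridges show up under vendor ID 174c,
# JMicron under 152d, VIA Labs hubs under 2109.
lsusb

# Check whether SMART data makes it through the bridge
# ("-d sat" selects SAT pass-through; /dev/sdb is an example).
sudo smartctl -d sat -a /dev/sdb | head -n 20
```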

[–] atzanteol@sh.itjust.works 11 points 9 months ago* (last edited 9 months ago) (1 children)

The argument for hardware RAID has typically been about performance, but software RAID has been plenty performant for a very long time, especially for home use over USB...

Hardware RAID also ties the array to a specific RAID controller, so if that card dies you likely need an identical replacement to use your RAID. A Linux software RAID can be mounted by any Linux system you like, as long as all the member drives are present.

There are two general categories of software RAID: the more traditional mdadm, and filesystems with built-in RAID-like features.

mdadm creates and manages the RAID in a very traditional way and provides a new, filesystem-agnostic block device, typically something like /dev/md0. You can then put whatever you like on top of it (ext4, btrfs, ZFS, or even LVM).
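
A concrete sketch of what that looks like for the 6 x 8TB RAID6 being considered here (device names, mount point, and the Debian-style config path are examples, not a tested recipe):

```
# Create a 6-disk RAID6 array; mdadm exposes it as /dev/md0.
sudo mdadm --create /dev/md0 --level=6 --raid-devices=6 \
    /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf /dev/sdg

# Put an ordinary filesystem on top and mount it.
sudo mkfs.ext4 /dev/md0
sudo mount /dev/md0 /mnt/storage

# Record the array so it assembles automatically on boot.
sudo mdadm --detail --scan | sudo tee -a /etc/mdadm/mdadm.conf

# Watch the initial sync / check status at any time.
cat /proc/mdstat
```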

Newer filesystems like BTRFS and ZFS implement RAID-like functionality themselves, with some advantages and disadvantages. You'll want to do a bit of research here depending on the RAID level you wish to use. BTRFS, for example, doesn't have a mature RAID5 implementation as far as I'm aware (that was true last I checked, but double-check).
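
The ZFS equivalent of a RAID6-style layout is raidz2. A rough sketch, where the pool name "tank" and the by-id paths are placeholders:

```
# Create a 6-disk raidz2 pool (two disks' worth of parity).
# /dev/disk/by-id/ paths are preferable to bare /dev/sdX names.
sudo zpool create tank raidz2 \
    /dev/disk/by-id/ata-DISK1 /dev/disk/by-id/ata-DISK2 \
    /dev/disk/by-id/ata-DISK3 /dev/disk/by-id/ata-DISK4 \
    /dev/disk/by-id/ata-DISK5 /dev/disk/by-id/ata-DISK6

# The pool mounts itself under /tank by default.
zpool status tank
zfs list tank
```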

I'd also recommend thinking a bit about how to expand your RAID later. Run out of space? Want to add drives? Replace drives? The different implementations handle this differently. mdadm has rather strict requirements that all member partitions be "the same size" (though you can use a disk bigger than the others and only use part of it). I think ZFS is more flexible about different-sized disks, which may make growing the array easier: you can replace one disk at a time with a larger version pretty easily (that's possible with mdadm too, but more complex).

You may also wish to add more disks in the future and not all configurations support that.
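
A rough illustration of how growth works in each camp (device and pool names are examples; check the documentation for your versions before relying on any of this):

```
# mdadm: add a 7th disk to the RAID6 and reshape the array onto it.
sudo mdadm --add /dev/md0 /dev/sdh
sudo mdadm --grow /dev/md0 --raid-devices=7

# mdadm: after every member has been swapped for a larger disk,
# grow the array to the new size, then the filesystem on top.
sudo mdadm --grow /dev/md0 --size=max
sudo resize2fs /dev/md0

# ZFS: replace members one at a time with larger disks; once all are
# replaced (and autoexpand is on), the pool grows by itself.
sudo zpool set autoexpand=on tank
sudo zpool replace tank /dev/disk/by-id/ata-OLD /dev/disk/by-id/ata-NEW
```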

I run a RAID5 on mdadm with LVM and ext4 with no trouble. But I built my RAID when BTRFS and ZFS were a bit more experimental, so I'm less sure about what they do and how stable they are. For what it's worth, my server is a Dell T110 from around 12 years ago with a 2-core Intel G850, which isn't breaking any speed records these days. I don't notice any significant CPU usage with my setup.

[–] dan@upvote.au 3 points 9 months ago (1 children)

I used to use mdadm, but ZFS mirrors (equivalent to RAID1) are quite nice. ZFS automatically stores checksums. If some data is corrupted on one drive (meaning the checksum doesn't match), it automatically fixes it for you by getting the data off the mirror drive and overwriting the corrupted data. The read will only fail if the data is corrupted on both drives. This helps with bitrot.

ZFS also has raidz1 and raidz2, which use one or two disks' worth of parity and have the same advantages. I've only got two 20TB drives in my NAS though, so a mirror is fine.
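
A minimal sketch of that kind of setup and where the self-healing shows up (pool name and device paths are placeholders):

```
# Two-disk mirror (the RAID1 equivalent).
sudo zpool create tank mirror \
    /dev/disk/by-id/ata-DISK1 /dev/disk/by-id/ata-DISK2

# Read and verify every block against its checksum; corrupted blocks
# are rewritten from the healthy copy automatically.
sudo zpool scrub tank

# Repaired and unrecoverable errors show up in the CKSUM column here.
zpool status -v tank
```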

[–] atzanteol@sh.itjust.works 1 points 9 months ago

If I were to redo things today I would probably go with ZFS as well. It seems to be pretty robust and stable, and in particular I like the flexibility in drive sizes when doing RAID. I've been bitten with mdadm by two drives of the "same size" that were off by a few blocks...

[–] Paragone@lemmy.world 11 points 9 months ago

I read somewhere, years ago, that RAID6 takes about two cores on a working server.

That may have been a decade ago, and hardware has improved significantly since then.

Still, I'd budget for at least one core being kept busy under heavy use of a RAID6 or RAIDZ2 array.
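
If you want a number for your own CPU, the kernel benchmarks its RAID6 parity routines when the raid6 module loads and logs the result (exact output varies by kernel version):

```
# Shows lines like "raid6: avx2x4 gen() ... MB/s" and the chosen algorithm.
# If there is no output, load the module first: sudo modprobe raid6_pq
sudo dmesg | grep -i raid6

# Once an array exists, live rebuild/check speed is visible here.
cat /proc/mdstat
```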


I'd go with software RAID, not hardware: with hardware RAID, a dead array due to a dead controller card means you need EXACTLY the same card, possibly with the same firmware revision, to be able to recover the RAID.

With mdadm, that simply isn't a problem: mdadm can always understand mdadm RAIDs.
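
For what it's worth, that recovery amounts to a single command on whatever Linux machine the drives get moved to (a sketch, assuming the md superblocks are intact):

```
# Scan all block devices for md superblocks and assemble any arrays found
# (most distros already do this automatically via udev).
sudo mdadm --assemble --scan

# Or inspect a single member first to see which array it belongs to.
sudo mdadm --examine /dev/sdb
```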

_ /\ _

[–] lemming741@lemmy.world 10 points 9 months ago (4 children)

My guy Wendell says that Hardware Raid is Dead and is a Bad Idea in 2022

https://www.youtube.com/watch?v=l55GfAwa8RI

[–] vikingtons@lemmy.world 5 points 9 months ago (1 children)
[–] PipedLinkBot@feddit.rocks 1 points 9 months ago

Here is an alternative Piped link(s):

https://www.piped.video/watch?v=Q_JOtEBFHDs

Piped is a privacy-respecting open-source alternative frontend to YouTube.

I'm open-source; check me out at GitHub.

[–] bazsy@lemmy.world 3 points 9 months ago* (last edited 9 months ago)

There is an even more relevant video about using external storage through USB, in which he recommends software RAID:

Can We Build a Home Server Out of Mini PCs?

[–] BentiGorlich@gehirneimer.de 2 points 9 months ago

Very informative, thank you :)

[–] PipedLinkBot@feddit.rocks 1 points 9 months ago

Here is an alternative Piped link(s):

https://www.piped.video/watch?v=l55GfAwa8RI


[–] MangoPenguin@lemmy.blahaj.zone 8 points 9 months ago (2 children)

Don't do a RAID enclosure, just get one that exposes the disks straight to the OS.
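
An easy way to check whether an enclosure really passes the disks through individually, rather than presenting one logical volume, is to see each physical drive with its own model and serial number:

```
# Each disk should appear as its own device with TRAN=usb and its real
# model/serial, not as a single "RAID box" volume.
lsblk -o NAME,SIZE,TRAN,MODEL,SERIAL
```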

[–] BentiGorlich@gehirneimer.de 3 points 9 months ago (1 children)

The problem for me is that there is no 6-bay enclosure, and the 8-bay enclosures cost as much as a RAID-capable one.

[–] fuckwit_mcbumcrumble@lemmy.world 1 points 9 months ago

I'd pay more money for a non-RAID enclosure.

[–] wurstgulasch3000@lemmy.world 1 points 9 months ago

I just set up my Icy Box drive bay with software RAID. It works great; just remember that in some cases you have to disable UAS for the enclosure.
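
For anyone who runs into that, the usual approach is a usb-storage quirk keyed to the enclosure's vendor:product ID; the 152d:0578 ID and the Debian-style initramfs command below are only examples:

```
# Find the enclosure's vendor:product ID.
lsusb

# Tell the kernel to fall back from UAS to plain usb-storage for that ID
# ("u" = ignore UAS), then rebuild the initramfs and reboot.
echo 'options usb-storage quirks=152d:0578:u' | sudo tee /etc/modprobe.d/disable-uas.conf
sudo update-initramfs -u
```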

[–] BentiGorlich@gehirneimer.de 5 points 9 months ago (1 children)

Since hardware RAID is not state of the art anymore, I will definitely stick with software RAID. I think I will just build a new server for the money: an 8-bay USB enclosure costs around 600€, and for that amount I can build a new server with even better performance.

[–] lemmyvore@feddit.nl 3 points 9 months ago (1 children)

While you're at it you can get a PC case with plenty of drive slots. Check out Fractal Design.

[–] BentiGorlich@gehirneimer.de 1 points 9 months ago

That's what I will be going for 😁

[–] Violet_McQuasional@feddit.uk 5 points 9 months ago

ZFS kicks arse. It's worth learning enough to get a basic array going, with a couple of datasets and encryption. Once you get acquainted with that, you'll be using it for years to come.
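
If it helps, here's a rough sketch of that starting point with placeholder pool and dataset names (native encryption needs OpenZFS 0.8 or newer):

```
# Datasets are cheap; make one per thing you care about so snapshots
# and properties can be set independently.
sudo zfs create tank/media
sudo zfs create -o compression=lz4 tank/backups

# A natively encrypted dataset; this prompts for a passphrase and has
# to be unlocked (zfs load-key) after each reboot.
sudo zfs create -o encryption=aes-256-gcm -o keyformat=passphrase tank/private

# Snapshots are instant and take no space until data diverges.
sudo zfs snapshot tank/media@before-cleanup
```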

[–] avidamoeba@lemmy.ca 5 points 9 months ago* (last edited 9 months ago)

Software, software, software! ZFS, mdraid, etc. USB is fine even with hubs, so long as your hubs and USB controllers (USB-to-SATA) are decent and not overheating.

[–] possiblylinux127@lemmy.zip 2 points 9 months ago

Your CPU should support VT-d/IOMMU, so you could use VFIO to pass a PCIe SATA controller through to TrueNAS.
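
A very rough sketch of what that involves on an Intel host; the PCI ID, config paths, and VM name are placeholders, and the exact steps depend on your hypervisor:

```
# 1. Enable the IOMMU on the kernel command line (e.g. in GRUB), then reboot:
#    intel_iommu=on iommu=pt

# 2. Find the SATA controller's address and vendor:device ID.
lspci -nn | grep -i sata

# 3. Bind it to vfio-pci instead of ahci (example ID shown), rebuild the
#    initramfs for your distro, and reboot.
echo 'options vfio-pci ids=1b21:0612' | sudo tee /etc/modprobe.d/vfio.conf
sudo update-initramfs -u

# 4. Attach the controller to the TrueNAS VM, e.g. with libvirt:
#    virsh attach-device truenas-vm sata-hostdev.xml --persistent
```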

[–] Decronym@lemmy.decronym.xyz 2 points 9 months ago* (last edited 9 months ago)

Acronyms, initialisms, abbreviations, contractions, and other phrases which expand to something larger, that I've seen in this thread:

LVM: (Linux) Logical Volume Manager for filesystem mapping
NAS: Network-Attached Storage
NUC: Next Unit of Computing brand of Intel small computers
PCIe: Peripheral Component Interconnect Express
PSU: Power Supply Unit
RAID: Redundant Array of Independent Disks for mass storage
SAN: Storage Area Network
SATA: Serial AT Attachment interface for mass storage
ZFS: Solaris/Linux filesystem focusing on data integrity


[–] JASN_DE@lemmy.world 1 points 9 months ago (1 children)

How are those disks/the box connected to the NUC?

[–] BentiGorlich@gehirneimer.de 3 points 9 months ago

USB-C. The NUC only has a single SATA connector inside.

[–] originalucifer@moist.catsweat.com -4 points 9 months ago (1 children)

Just my 2 cents: if you're going to do RAID, buy a thing that will do it for you...

a NAS or enclosure where the hardware does all the heavy lifting. Don't build a RAID system from a bunch of loose disks... I have had, and friends of mine have had, many failures over the years from those home-brew RAIDs going sideways in one way or another, and it's usually the software that causes it... maybe it's better today than it was 10-20 years ago.

It's just off my list. I bought a bunch of cheap NAS devices that handle the RAID, and then I mirror those devices for redundancy.

[–] doubletwist@lemmy.world 3 points 9 months ago (1 children)

Y'all must be doing something wrong, because HW RAID has been hot garbage for at least 20 years. I've been using software RAID (mdadm, ZFS) since before 2000 and have never had a problem that could be attributed to the software RAID itself, while I've had all kinds of horrible things go wrong with HW RAID. And that holds true not just at home but professionally as a sysadmin on enterprise-level systems.

With the exception of the (now rare) bare-metal Windows server, or the most basic boot-drive mirroring for VMware (with the important datastores on NAS/SAN, which use software RAID underneath, with at most some limited HW-assisted acceleration), hardly anyone has trusted hardware RAID for decades.

[–] KnightontheSun@lemmy.world 1 points 9 months ago (1 children)

Y’all must’ve been doing something wrong with your hardware raid to have so many problems. Anecdotally, as an admin for 20+ years, I’ve never had a significant issue with hardware raid. The exception might be the Sun 3500 arrays. Those were such a problem and we had dozens of them. No lost data, but so many controller issues. I just left some of them beeping for a while in the server room since data was still being served.

Of course at the enterprise level we have sufficient redundancies built in, but I also use both hardware and software raid at home. No issues with either really.

[–] atzanteol@sh.itjust.works 1 points 9 months ago (1 children)

Y’all must’ve been doing something wrong with your hardware raid to have so many problems. Anecdotally, as an admin for 20+ years, I’ve never had a significant issue with hardware raid. The exception might be the Sun 3500 arrays. Those were such a problem and we had dozens of them.

So what were you doing wrong to have so much trouble with the Sun 3500s?

[–] KnightontheSun@lemmy.world 1 points 9 months ago* (last edited 2 months ago)