
I have a ZFS RAIDZ2 array made of 6x 2TB disks with power-on hours between 40,000 and 70,000. This is used just for data storage of photos and videos, not OS drives. Part of me is a bit concerned at those hours considering they're a right old mix of desktop drives and old WD Reds. I keep them on 24/7 so they're not too stressed in terms of power cycles, but they have in the past been through a few RAID5 rebuilds.
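For the curious, this is roughly how I keep an eye on those numbers; a minimal sketch, assuming the six disks enumerate as sda through sdf (yours may differ):

```bash
# Survey SMART attributes across the array members.
# /dev/sd{a..f} is a guess at the device names -- adjust to taste.
for d in /dev/sd{a..f}; do
    echo "== $d =="
    sudo smartctl -A "$d" | grep -E 'Power_On_Hours|Reallocated_Sector|Current_Pending_Sector'
done
```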

Considering swapping to 2x 'refurbed' 12TB enterprise drives and running ZFS RAIDZ1. So even though they'd have a decent amount of hours on them, they'd be better-quality drives, and fewer disks means less chance of any one failing (I have good backups).
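If I do it, the move itself would be something like the below; a rough sketch only, with the pool names and by-id paths as placeholders rather than my real ones:

```bash
# Build the new two-disk pool (placeholder names throughout).
zpool create tank2 mirror \
    /dev/disk/by-id/ata-REFURB_12TB_A \
    /dev/disk/by-id/ata-REFURB_12TB_B

# Snapshot everything on the old pool and replicate it over.
zfs snapshot -r tank@migrate
zfs send -R tank@migrate | zfs receive -F tank2

# Sanity-check the copy before retiring the old pool.
zpool scrub tank2
```

(Worth noting: with only two disks, a RAIDZ1 vdev and a mirror give the same single-disk redundancy and usable space, and mirrors generally resilver faster, so I'd probably actually create it as a mirror as above.)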

The next time one of my current drives dies, I don't feel like staying with my current setup will be worth it, so I may as well change over now before it happens?

Also, the 6x disks I have at the moment are really crammed into my case in a hideous way, so from an aesthetic POV (not that I can actually see them, the solid case being in a rack in the garage), it'll be nicer.

TechnicallyColors@lemm.ee, 1 point, 10 hours ago (last edited 10 hours ago)

I don't think 'cattle not pets' is all that corporate, especially w/r/t death of the author. For me, it's more about making sure that failure modes have (rehearsed) plans of action, and being cognizant of any manual, unreplicable "hand-feeding" that you're doing. Random and unexpected hardware death should be part of your system's lifecycle, not something to spend time worrying about. This is also basically how ZFS was designed at a core level: its immense distrust of hardware lets you connect whatever junky parts you want, and ZFS catches the drives that are lying or dying. In the original example, uptime seems to be an emphasized tenet, but I don't think it's the most important part.
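To make that concrete, the "catching liars" part is just routine scrubs plus watching the error counters; a minimal sketch, with "tank" standing in for whatever the pool is called:

```bash
# Scrub re-reads and re-checksums every block in the pool; a drive
# silently returning bad data shows up in the CKSUM column.
zpool scrub tank
zpool status -v tank   # non-zero READ/WRITE/CKSUM counts = a drive to distrust
```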

Re: replacements on a schedule, that might be true for RAIDZ1, but IMO a big selling point of RAIDZ2 is that you're not in a huge rush to get resilvering done. I keep a cold drive around anyway.
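The rehearsed plan with the cold drive amounts to a few commands; a sketch only, with the pool and disk names as placeholders for the real ones:

```bash
# Swap the cold spare in for a dead (or condemned) member.
zpool offline tank ata-OLD_2TB_DRIVE        # if it hasn't faulted on its own already
zpool replace tank ata-OLD_2TB_DRIVE /dev/disk/by-id/ata-COLD_SPARE
zpool status tank                            # watch the resilver progress
```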