this post was submitted on 11 Jan 2025
Selfhosted


I have a ZFS RAIDZ2 array made of 6x 2TB disks with power-on hours between 40,000 and 70,000. This is used just for data storage of photos and videos, not OS drives. Part of me is a bit concerned at those hours considering they're a right old mix of desktop drives and old WD Reds. I keep them on 24/7 so they're not too stressed in terms of power cycles, but they have in the past been through a few RAID5 rebuilds.
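For a sense of scale, those power-on hours translate to continuous runtime roughly as follows (a quick back-of-envelope conversion, nothing drive-specific assumed):

```python
# Rough conversion of power-on hours to calendar years of 24/7 operation.
HOURS_PER_YEAR = 24 * 365.25  # ~8766 hours in an average year

for hours in (40_000, 70_000):
    print(f"{hours:,} h is about {hours / HOURS_PER_YEAR:.1f} years powered on")
```

So the drives have been spinning for roughly 4.6 to 8 years of continuous service.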

Considering swapping to 2x 'refurbed' 12TB enterprise drives and running ZFS RAIDZ1. So even though they'd have a decent number of hours on them, they'd be better-quality drives, and fewer disks means less chance of any one failing (I have good backups).
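The "fewer disks" intuition can be made concrete. A minimal sketch, assuming independent failures at a fixed annual failure rate per drive (the AFR figures below are purely illustrative guesses, not measured values):

```python
def p_any_failure(afr: float, n: int) -> float:
    """Probability that at least one of n drives fails within a year,
    assuming independent failures at a fixed annual failure rate (AFR)."""
    return 1 - (1 - afr) ** n

# Illustrative AFRs only: a guess for old mixed drives vs refurb enterprise.
print(p_any_failure(0.05, 6))  # six older drives: about 0.265
print(p_any_failure(0.03, 2))  # two refurb drives: about 0.059
```

Under those assumptions the six-drive pool is several times more likely to see a failure in a given year, though it's worth remembering that RAIDZ2 also tolerates one more failed drive than RAIDZ1 does.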

Next time one of my current drives dies I don't think sticking with this setup will be worth it, so I may as well change over now before it happens?

Also, the 6x disks I have at the moment are really crammed into my case in a hideous way, so from an aesthetic POV (not that I can actually see it, the solid case is in a rack in the garage) it'll be nicer.

[–] thejml@lemm.ee 3 points 1 day ago (1 children)

Nice, we’ll all look out for an update in a year!

I try to mix brands and lots (buy a few from one retailer and some from another). I used to work for a storage/NAS company and we had many incidents where we'd fill a 12- or 24-drive RAID with drives straight from the same order and have multiple drives die within hours of each other, which usually isn't enough time for replacement/resilvering.
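That lot-mixing rule of thumb can even be checked mechanically. A minimal sketch, assuming the first few characters of a serial number stand in for a manufacturing lot code (real lot encodings vary by vendor, and the serials below are made up):

```python
from collections import Counter

def shared_lot_prefixes(serials: list[str], prefix_len: int = 4) -> list[str]:
    """Return serial-number prefixes shared by more than one drive,
    a rough proxy for drives coming from the same manufacturing lot."""
    counts = Counter(s[:prefix_len] for s in serials)
    return [prefix for prefix, n in counts.items() if n > 1]

# Hypothetical serials: the first four characters play the role of a lot code.
print(shared_lot_prefixes(["WX11A001", "WX11A002", "WX11B003", "ZA09C004"]))
```

If the result is non-empty, several drives likely came from the same batch and it may be worth spreading replacements across retailers.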

[–] Cyber@feddit.uk 2 points 23 hours ago* (last edited 23 hours ago)

Yep, seen a similar thing with servers...

A few years ago I built up a system with ~20 servers. Powered them all up and did all the RAID initialisation (RAID5 across 6-8 disks per server, IIRC).

One server basically needed all its disks replacing, and some of the others needed a disk or 2 replaced - within a month!

Since replacing those disks and rebuilding all those arrays, I'm happy to build a NAS / server, let it bed in for a while, and if nothing fails I'll just keep powering my NAS up & down as needed and run the drives until they die...