this post was submitted on 22 Jul 2023
5 points (100.0% liked)

datahoarder


Current System

My current system lives inside a Node 804 case and includes the following components:

I'm using btrfs to set up two storage pools, each containing one 8TB and one 18TB drive. The second pool serves as a backup for the first one.
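
(For the curious: a backup between two btrfs pools like this is commonly done with read-only snapshots plus send/receive, roughly like the sketch below. Mount points and names are placeholders, not my exact script.)

```python
#!/usr/bin/env python3
"""Illustrative sketch: mirror pool1 onto pool2 with btrfs send/receive."""
import subprocess
from datetime import date

SRC = "/mnt/pool1"           # primary pool (one 8 TB + one 18 TB drive)
DST = "/mnt/pool2/backups"   # backup pool with the same layout
snap = f"{SRC}/.snapshots/{date.today().isoformat()}"

# Take a read-only snapshot of the primary pool...
subprocess.run(["btrfs", "subvolume", "snapshot", "-r", SRC, snap], check=True)

# ...and stream it to the backup pool (an incremental send would add "-p <parent>")
send = subprocess.Popen(["btrfs", "send", snap], stdout=subprocess.PIPE)
subprocess.run(["btrfs", "receive", DST], stdin=send.stdout, check=True)
send.wait()
```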

The setup has been running well for approximately two years. However, I'm gradually running out of storage space, and the upgradability isn't as good as I'd like: both the PSU and the motherboard have only one free SATA connector left, since the boot drive also uses one.

Plan / Options

Knowing me and this hobby, I anticipate a gradual, ongoing addition of drives to my system, so I want that process to be as simple as possible. After doing some research I was thinking about separating the drives from the host system. The plan would be to store the drives in a JBOD/DAS enclosure that covers their power and data needs, and then connect that back to my host somehow.

To me, stepping into "Enterprise" hardware land is new and honestly a little intimidating so I wanted to get some input from the more experienced people around here.

The "plan" I came up with so far doesn't sound that complicated. As far as I understand I'd want the following (Sorry for the terminology):

  • A JBOD enclosure with at least 12 hot-swappable SATA bays, a PSU, and SAS output in the back
  • An HBA that goes into the second PCIe slot on the B450 Aorus M
  • A compatible SAS? cable
  • A small rack to mount the enclosure in

Given that all the components are compatible, this setup would allow me to add a new drive to the JBOD, see the drive "raw" on my host (this is what "IT mode" is for on the SAS cards, I believe), format it, and add it to one of the two existing pools.
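
Assuming that works, growing a pool would just be the usual btrfs routine; a minimal sketch of what I mean (device and mount-point names are placeholders):

```python
#!/usr/bin/env python3
"""Illustrative only: add a new raw drive to an existing btrfs pool.
/dev/sdX and /mnt/pool1 are placeholders, not my real device names."""
import subprocess

NEW_DEVICE = "/dev/sdX"   # the drive as it shows up through the HBA
POOL = "/mnt/pool1"       # mounted btrfs pool to grow

# Add the raw device to the mounted pool...
subprocess.run(["btrfs", "device", "add", NEW_DEVICE, POOL], check=True)
# ...then rebalance so existing data and metadata spread onto the new drive
subprocess.run(["btrfs", "balance", "start", POOL], check=True)
```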

Things I need help with

Obviously, picking the right components is the biggest challenge. Many posts here suggest the NetApp DS4246 enclosure as a good pick: they're available at good prices and there's room for many drives.

But there are some open questions for me regarding the DS4246:

  • Are my drives compatible? I've read numerous times that the max drive capacity is 4 TB. I find that quite hard to believe; it may just be down to the age of the posts I've read, but I want to be sure.
  • Why are there usually (like here) two pairs of SAS and Ethernet connectors in the back? How would I connect from the given interface to my host server? (Again, sorry for the terminology.)

And another more general question:

  • What are the deciding factors when it comes to choosing the right HBA and cables? The prices I've seen so far range from $35 to $800 in both categories. This comment suggests the DS4246 in combination with an LSI 9201 8e/16e HBA. What specs do I have to look for to see whether that might be compatible? The same goes for finding a compatible cable.

Is my strategy viable in general? What are things I probably have not thought about?

I'm hoping you can help me resolve some of my questions and improve my storage setup.

Thank you for taking the time to read this!

top 2 comments
[–] Bread@sh.itjust.works 1 points 1 year ago* (last edited 1 year ago) (1 children)
  1. If they're saying there's a 4 TB limit, that might have something to do with the SAS backplane of the DS4246. If it's an older one, this might be a real concern that requires you to buy a new backplane.

  2. Good ol' fashioned redundancy. You probably don't need it for what you're doing. You would need an HBA card that supports that connector.

  3. The right HBA and cables depend on what you're trying to connect your drives to: directly to the drives, to a backplane with individual drive connectors, or to a backplane with one or two cables. How you decide to approach it depends entirely on what you want and how much you want to spend. I couldn't tell you what's in the DS4246.

You have a problem, though: that second slot is only PCIe x4, so you can get a maximum of 8 additional drives without losing any speed. You can get a card that goes over that limit, but then you're limited by bandwidth. To get more drives at full speed, you will need to remove the GPU or use the x1 slot for another drive or two.
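
Back-of-the-envelope, with assumed numbers (the exact PCIe generation of that chipset slot and real-world drive speeds will vary):

```python
# Ballpark: how many spinning drives a narrow PCIe link can feed at full speed.
# Assumptions: ~250 MB/s sequential per 3.5" HDD; usable link throughput of
# roughly 2 GB/s for PCIe 2.0 x4 and ~3.9 GB/s for PCIe 3.0 x4.
HDD_MBPS = 250
LINKS_MBPS = {"PCIe 2.0 x4": 2000, "PCIe 3.0 x4": 3900}

for name, link in LINKS_MBPS.items():
    print(f"{name}: ~{link // HDD_MBPS} drives before the link saturates")
# PCIe 2.0 x4: ~8 drives, PCIe 3.0 x4: ~15 drives
```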

If you want my personal recommendation: unless you already have a server rack or plan to need one for other servers, just get a large PC case like the Fractal Define 7 XL, upgrade your power supply, and buy an HBA card and cables that let you connect as many drives as you need individually. Why would I recommend this? Because enterprise servers like the DS4246 are loud as fuck. You don't really realize how much sound matters until you live with a jet engine. That, and because adding a JBOD is adding more complexity than you need for your build. More parts to fail. You could probably get your required parts off eBay for cheaper if you'd like.

There is also the question of whether or not your board supports HBA cards at all, so keep that in mind for your research.

If you eventually need more drives than that, you're going to need to upgrade to a motherboard that supports your needs. At that point it would be worth looking into a rack mount for more than the 18 drives the Fractal case can handle.

If you want more information, I need to know whether you plan to use my idea or not. As of right now, you can't properly use the DS4246 chassis without removing the GPU and using that PCIe slot, or using a card that lets you but limits your total performance.

If you don't mind me asking, what is the main purpose of this server? If you need the GPU, I assume you're using it for Plex or something.

[–] guidable@feddit.de 1 points 1 year ago

The server is mainly used for Jellyfin, Nextcloud, and as a general-purpose family server. It also lives in my parents' utility room, so noise is an issue: I don't want to annoy them, but it's not like I'm putting a jet engine in their living room.

It was running without a GPU at first, but then I realized I had that (pretty old) GPU lying around. I could switch the GPU into the lower PCIe x4 slot and give the HBA the top PCIe x16 slot so it has the full bandwidth.

Thank you for your recommendation! Following up on this I have a couple questions:

  • Why would I need an HBA in this scenario? A SATA expansion card that goes into my PCIe x16 slot would do too, right?
  • Upgrading the PSU would be necessary, both for the number of SATA connectors available and for the total power output (rough numbers below). Do you have a specific PSU in mind that you can recommend?
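
For reference, my rough 12 V math; the per-drive figures are typical assumptions, not measurements of any specific model:

```python
# Ballpark 12 V budget for a stack of 3.5" drives (assumed typical figures).
DRIVES = 12
ACTIVE_W = 8        # ~5-9 W per drive while reading/writing
SPINUP_AMPS = 2.0   # ~2 A surge on the 12 V rail during spin-up

print(f"steady state: ~{DRIVES * ACTIVE_W} W for the drives alone")
print(f"simultaneous spin-up: ~{DRIVES * SPINUP_AMPS * 12:.0f} W on the 12 V rail")
# Staggered spin-up (supported by many HBAs/backplanes) softens that surge.
```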

If you eventually need more drives than that, you're going to need to upgrade to a motherboard that supports your needs. At that point it would be worth looking into a rack mount for more than the 18 drives the Fractal case can handle.

While I like your plan and think it'd give me enough room to grow for the foreseeable future, I'd love to skip this step and go directly to rack-mounted hardware where the drives connect to a backplane that delivers power and data for the entire shelf, with a single cable running back to my host system. The main downside here for me is the noise. I also came across many of these "JBOD enclosures" from Icy Box, which sound great in theory but only connect back to the host via USB 3 or eSATA, and I don't know whether that's enough connectivity for that many drives.
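
To put rough numbers on that worry (the link speeds and per-drive throughput below are assumptions, so treat it as a ballpark):

```python
# Ballpark: how a single USB 3.0 or eSATA uplink divides across several drives.
HDD_MBPS = 250                                                  # assumed sequential speed per drive
LINKS_MBPS = {"USB 3.0 (5 Gbps)": 450, "eSATA (6 Gbps)": 550}   # rough real-world rates

for name, link in LINKS_MBPS.items():
    for drives in (4, 8):
        per_drive = link / drives
        print(f"{name}, {drives} drives: ~{per_drive:.0f} MB/s each "
              f"({per_drive / HDD_MBPS:.0%} of one drive's full speed)")
```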
