I recently purchased a Dell PowerEdge R730 at a killer price, and intend it to be the cornerstone of my home lab. I plan to use it as both a NAS and a container server so I can set up whatever I want with it. I’m a bit unsure of what a good setup here looks like, so I’m hoping for a bit of guidance.

As my R730 has 16 drive bays, I intend for 10 of those to be high-capacity HDDs for the NAS, with the remaining spots for SSDs for the containers. The R730 will also have a PERC H730 RAID controller. I want a full-featured NAS solution (although I am open to more lightweight options), so my go-to thought is TrueNAS. My plan was to install Proxmox and run TrueNAS on top of it, but I am unsure if this is the best method. Does anyone have any insight on how well this works, or if there's a cleaner solution?
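
From my reading, the usual way to do this is to flip the H730 into HBA mode and pass the whole controller through to the TrueNAS VM so it sees the raw disks. Something like this, if I understand it right (the VM ID and PCI address below are placeholders for whatever mine turn out to be):

# Find the PCI address of the H730
lspci | grep -i -e raid -e sas

# Hand the controller to the TrueNAS VM (VM ID 100 is a placeholder;
# IOMMU/VT-d has to be enabled in the BIOS and kernel first)
qm set 100 --hostpci0 0000:03:00.0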

Addendum: Does anyone have any recommendations for RAID setups? I currently have 4x 900 GB 10k SAS Dell Enterprise drives, but I intend to bump that up to 10x 900 GB over time. I'd like to be able to add these without much hassle, but I'm unsure what to go with. It seems that ZFS can handle this well on its own, but I don't want to have gotten the good RAID controller for nothing, so I'm wondering whether ZFS with the controller in HBA mode is more worth it than a dedicated hardware RAID setup. And if I go the RAID route, should it be traditional RAID or Unraid? If traditional RAID, is RAID 01, 10, or 60 the better option here? Based on my research, it sounds like I'll need a lot more drives for a proper RAID setup and it'll be less flexible, but I'd like some second opinions.
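
If I do end up on ZFS in HBA mode, my understanding is that a pool of mirrors would let me grow two drives at a time, roughly like this (pool and device names are made up; I'd use /dev/disk/by-id paths in practice):

# Start with my current 4 drives as two mirrored pairs (RAID 10 equivalent)
zpool create tank mirror sda sdb mirror sdc sdd

# Later, grow the pool two drives at a time by adding another mirror vdev
zpool add tank mirror sde sdf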

  • @[email protected]
    8 months ago

    Can you elaborate on the scenario this is solving for? Isn’t software RAID a performance hit?

    • @[email protected]
      8 months ago

      It's cheaper, has better visibility into drive health, and things like CoW mean a file is extremely unlikely to be corrupted on a power failure (with hardware RAID you're relying on the battery in the RAID controller for that protection; I guess you could run CoW on top of a hardware RAID). CoW also helps spread wear on SSDs.
      ZFS will heal data if it finds corrupted blocks; I'm not sure a hardware RAID does.
      ZFS is the same anywhere and is adjusted via software (as opposed to the Dell PERCs, which I believe require booting into what is essentially a BIOS. Certainly I've never had them work through iDRAC), and you don't have to learn that RAID controller's management UI (although they're never difficult).
      It's also another part that could fail and require a like-for-like replacement. ZFS just needs to be able to access the drives.

      I looked into it ages ago, and ZFS on an HBA made so much more sense than a $300 used RAID controller.
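
      The self-healing part is just a scrub away, e.g. ("tank" is a placeholder pool name):

      # Verify every block against its checksum; bad copies get
      # rewritten from the healthy mirror/parity copy
      zpool scrub tank

      # Watch progress and see whether anything was repaired
      zpool status -v tank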

    • @[email protected]
      8 months ago

      For me, the inability to reassemble a RAID array on a different server (with a different controller, or without one at all) for data recovery is by itself a big "NO" to any RAID controller in a home lab. ZFS doesn't have that problem; a pool can be imported on any machine that can see the drives (see the sketch at the end of this comment).

      While it is fun to have an "industrial grade" thing, it isn't fun to recover data from such arrays. Also, ZFS is a very good filesystem (imagine fitting 4.8 TB of data on a 4 TB mirrored pool; that's my case with zstd compression), but it doesn't play well with RAID controllers: you'll experience slowdowns and frequent data corruption.
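
      To illustrate both points, here's roughly what pool portability and the compression check look like (pool name "tank" is just an example):

      # On the old server: cleanly detach the pool
      zpool export tank

      # On any other machine that can see the drives, no matching
      # controller required:
      zpool import tank

      # Turn on zstd (applies to newly written data) and check the
      # real-world ratio; this is where 4.8 TB on a 4 TB mirror comes from
      zfs set compression=zstd tank
      zfs get compressratio tank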