Hi all. I’m trying to choose a configuration for my home storage. Speed is not a priority; I want a balance of stability and performance. I was thinking of making a RAID 6 array with an ext4 file system across 4 disks of 2 TB each. Asking for advice: will this configuration be optimal?

Note - I am going to build the RAID array out of external USB drives, which I will plug into an Orange Pi.

  • chiisana@lemmy.chiisana.net
    1 year ago

    I have 8x 8TB in a RAID 6 setup via vanilla md RAID in an over-provisioned server. It works well enough for me and my family’s needs. You’ll likely hear a lot of proponents for ZFS, and yes, there’s no doubt it is more modern with some great features, but unless you’ve got your entire setup planned out, watch out for the hidden cost of ZFS.
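
    For the 4-disk setup you’re describing, the plain md RAID route boils down to roughly the following sketch; the device names are placeholders for whatever your disks actually enumerate as:

      # create a 4-disk RAID 6 array (two disks of usable capacity, two of parity)
      sudo mdadm --create /dev/md0 --level=6 --raid-devices=4 /dev/sd[b-e]

      # put ext4 on top and persist the array definition across reboots
      sudo mkfs.ext4 /dev/md0
      sudo mdadm --detail --scan | sudo tee -a /etc/mdadm/mdadm.conf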

    • Snowplow8861
      1 year ago

      ZFS is excellent. It’s enterprise software, designed to suit the whole “I’ve got 60 disks filling up a 4RU top-loaded SAN; if we expand, we have to buy another 60-disk expansion” scenario, and because of that it works perfectly for expansion. You don’t make a single raidz holding 60 disks. You make them in groups of, say, 6 or 8 or 10, whatever suits your needs for speed, storage, and resilience. When you expand, you drop another whole raidz vdev into the pool, maybe another 6 disks in the new storage shelf.
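
      As a rough sketch of that pattern (the pool and device names here are made up):

        # initial pool: one 6-disk raidz2 vdev
        zpool create tank raidz2 sda sdb sdc sdd sde sdf

        # later expansion: drop a whole second raidz2 vdev into the same pool
        zpool add tank raidz2 sdg sdh sdi sdj sdk sdl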

      But since the 2016 article you linked, the OpenZFS project has promised expansion of individual raidz vdevs. The announcement came in 2021: https://arstechnica.com/gadgets/2021/06/raidz-expansion-code-lands-in-openzfs-master/

      In 2022 there was an update: the feature design was complete, but there was no code yet: https://freebsdfoundation.org/blog/raid-z-expansion-feature-for-zfs/

      The actual pull request is here: https://github.com/openzfs/zfs/pull/15022

      And the last update was announced in June 2023, in the leadership meeting recorded here: https://m.youtube.com/watch?time_continue=1&v=2p32m-7FNpM

      You might think this is slow, and yeah, it’s a snail’s pace. But it’s not from lack of work; it’s really, truly because it’s part of the overall strategy of making sure ZFS, and every update and every feature, stays just as robust.

      I’m a fan, even after having hardware and disks fail, both in the enterprise and at home. ZFS import is so agnostic that you just pull in the pool; it doesn’t matter whether it came from BSD or Linux.

      • chiisana@lemmy.chiisana.net
        1 year ago

        You’re missing the point. For most residential deployments, which appears to be the use case in this thread, it is not viable to expect deployments pre-configured with the intended final/stable quantity of drives. Say someone deploys 6 drives today for a RAIDZ2 vdev, a sizeable commitment for most residential deployments; they cannot later add 2 more drives, and then 2 more again, without affecting the overall redundancy (data on each new pair will have either no redundancy or 1:1 redundancy in its own vdev). Let’s also not pretend everyone has a spare cluster sitting around to house all their current data while they rebuild the pool for expansion.

        This inability to expand linearly, compared to more conventional md RAID (or even hardware RAID, assuming you have enough ports), where the lowest denomination of expansion is 1 drive at any given time, basically eliminates ZFS as a suitable candidate for most residential usage. The vast majority of residential users will not be expanding their RAID in quantities of 6, 8, or 10 drives, and even if they did, most wouldn’t want to take the extra hit of “sacrificing” more drives to parity. All of those things are perfectly normal and expected in the enterprise space, but not in residential use.
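
        For contrast, growing an md RAID 6 by a single drive is roughly this (placeholder device names, a sketch rather than a tested recipe):

          # add the new disk, then reshape the array to use it
          sudo mdadm --add /dev/md0 /dev/sdf
          sudo mdadm --grow /dev/md0 --raid-devices=5

          # once the reshape finishes, grow the filesystem to match
          sudo resize2fs /dev/md0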

        And yes, the article I’ve linked has multiple updates which cover what is happening now; it’s not a stagnant, outdated article. I’m well aware they’re intending to merge the vdev expansion PR “sometime this year”, and have been for I don’t even know how many years. I’ll re-evaluate ZFS when it is merged and appropriately battle-tested in the wild.

  • badbytes@lemmy.world
    1 year ago

    Sounds like a plan, though it’s only optimal if your situation calls for it. One thing to keep in mind is whether to go with software or hardware RAID. Also, backups outside of this setup would be good. I like snapshotting, so that might be something you want to think about.

    • PigeonCatcher@l.antiope.link (OP)
      1 year ago

      Yeah, I am going to use off-site backup as well. Is snapshotting a thing in an ext4 setup? I don’t want to go with Btrfs; I’ve heard that RAID 6 on it goes bad.

        • badbytes@lemmy.world
          1 year ago

          Yeah, I’ve used LVM with snapshotting with great success. RAID is great; just remember it is only one measure for data retention. Another optional addition would be to add a hot spare.
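
          Roughly, assuming the array is a physical volume in a volume group called vg0 with a logical volume called data (made-up names), the snapshot flow looks like:

            # take a snapshot with 10G of copy-on-write space reserved
            sudo lvcreate --snapshot --size 10G --name data_snap /dev/vg0/data

            # either roll the volume back to the snapshot...
            sudo lvconvert --merge /dev/vg0/data_snap
            # ...or simply delete the snapshot once it is no longer needed
            sudo lvremove /dev/vg0/data_snap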

        • chiisana@lemmy.chiisana.net
          1 year ago

          That’s what I missed during my setup. I’m currently stuck without the LVM layer. With LVM, I think you can also theoretically add lvmcache and throw SSDs in front of your RAID array to act as cache, thereby improving the performance of your array. That’s another thing worth considering if you’re just setting up now.
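
          If I understand lvmcache correctly, the rough shape would be something like this; the volume group, LV, and device names are hypothetical:

            # add the SSD to the same volume group as the RAID array
            sudo vgextend vg0 /dev/sdx

            # carve a cache volume out of the SSD and attach it to the existing data LV
            sudo lvcreate --size 100G --name cache0 vg0 /dev/sdx
            sudo lvconvert --type cache --cachevol cache0 vg0/data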