We got three Seagate Constellation SAS drives (ST4000NM0043). One thing about ZFS is that it works best with identical drives, because the number of sectors needs to match across a vdev (the smallest drive sets the usable size for all of them). These were $319 each and are 4TB drives.
To fill this out, looking at the Solaris ZFS guide and Aaron’s best practices, each level of raidz adds a parity drive: raidz1 starts at 2+1, raidz2 at 4+2, and raidz3 at 6+3. This doesn’t map neatly onto a 5×4 array like we have in the Norcotek 4220. (I should have gotten the 4224!) The rule of thumb is a power of 2 data drives plus parity, so with raidz2 the logical configurations are 4+2, 8+2, and 16+2. They recommend each raidz group (vdev) be less than 9 drives according to the Solaris guide and less than 19 according to Aaron. With our 20-drive array, that means we could build a single raidz3 vdev of 16+3 and have 1 hot spare.
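The “power of 2 plus parity” rule is easy to sanity-check; here is the raidz2 case spelled out (the widths are just the ones named above):

```shell
# "power of 2 plus parity": raidz2 wants 4, 8, or 16 data drives plus 2 parity
for data in 4 8 16; do
  echo "raidz2 ${data}+2 = $((data + 2)) drives"
done
```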
ZFS supports hot spares, so the full configuration looks like 16+3+1. One complexity is that once you commit drives to a raidz vdev, you can’t change its layout or grow it by adding drives later. ZFS doesn’t have any hybrid RAID like Synology or Drobo, which kind of means you need to buy the whole drive set up front and then make sure any spares or replacements you buy later are at least as big.
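As a sketch, creating the 16+3+1 layout would look something like this. The pool name and the sdb…sdu device names are placeholders for whatever the 20 drives enumerate as; in real life you’d want /dev/disk/by-id paths so the pool survives device renumbering:

```shell
# Hypothetical: one raidz3 vdev (16 data + 3 parity = 19 drives) plus 1 hot spare.
# Device names below are placeholders, not from a real system.
zpool create tank \
  raidz3 sdb sdc sdd sde sdf sdg sdh sdi sdj sdk \
         sdl sdm sdn sdo sdp sdq sdr sds sdt \
  spare sdu

# Verify the layout and spare assignment
zpool status tank
```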
As an aside, with 4TB drives this is an absolutely monstrous amount of storage (16×4 = 64TB usable) with triple parity and a hot spare. And it costs a lot too: at $320 per Seagate Enterprise Constellation V3 drive, that’s $320×20 = $6,400!
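The arithmetic, spelled out (parity and spare drives cost money but add no usable space):

```shell
# Usable space: only the 16 data drives count, at 4TB each
echo "usable: $((16 * 4)) TB"

# Cost: all 20 drives (16 data + 3 parity + 1 spare) at roughly $320 each
echo "cost: \$$((20 * 320))"
```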
So it makes more sense to perhaps leave some room for expansion. Using 10 drives for the current array means the other 10 can be used for growing a second array later in a hopscotch fashion (assuming the array lives that long, technically). Also, with arrays this big you can add SSDs to accelerate performance, and these don’t have to be in the array itself: they recommend SSDs for the ZFS intent log (the SLOG) and for the L2ARC (a second-level read cache that spills from RAM onto disk). Seems like the new PCI Express NVMe SSDs will be perfect for that.
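Adding those SSDs later would look roughly like this (pool and device names are placeholders again):

```shell
# Hypothetical: attach a mirrored intent log (SLOG) and an L2ARC cache device.
# The log is worth mirroring, since losing it can lose in-flight writes;
# L2ARC is only a read cache, so a single device is fine.
zpool add tank log mirror nvme0n1 nvme1n1
zpool add tank cache nvme2n1
```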
The Supermicro X10SRH motherboard that we are using has an onboard eight-port 12Gb/s SAS controller, so if you want additional ports, an add-in controller is about $200–$400 for eight more, using another eight PCI Express lanes. So the initial design might be 4+2+1: seven drives giving 4×4TB = 16TB of reliable storage. Then the next growth step would be to add an additional controller (~$200) to activate 16 total drives. That could be another zpool of similar configuration, maybe using 6TB drives by that time, which would add 6TB×4 = 24TB more.
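The growth path in numbers, assuming the second pool is also a 4+2 raidz2, just with 6TB drives:

```shell
# Initial pool: 4 data drives at 4TB each (raidz2 4+2, plus a spare)
echo "pool 1: $((4 * 4)) TB"

# Later pool on the add-in controller: 4 data drives at 6TB each
echo "pool 2: $((4 * 6)) TB"

echo "total:  $((4 * 4 + 4 * 6)) TB"
```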