Adding storage to SoftNAS® involves adding virtual hard disks (VMDKs on VMware) to the SoftNAS® VM. The first step is to decide how to connect the drives, then associate the disks' storage with the SoftNAS® VM as virtual disks. For example, consider a set of ten 300 GB 15K SAS drives attached to a VMware vSphere host. There are several ways to incorporate these drives into SoftNAS®.
Option 1 - Add Hardware RAID Datastore and Virtual Disks
In this case, treat the 15K SAS drives just like any other RAID array created for a VMware vSphere host:
Configure and establish a RAID array
Use the vendor-supplied software that came with the disk controller
For example, configure the ten disks as RAID 6 (dual parity) with one hot spare: nine drives form the array, two of which carry parity, leaving seven data drives.
In VMware vSphere, the disk array will appear as a single storage device. Add this storage in the usual way, using the Add Storage menu in the vCenter / vSphere client to create a datastore from the array. Call this datastore hwraid1.
Then, in VM Settings for SoftNAS®, allocate one or more VMDKs to the SoftNAS® VM in this new datastore hwraid1. In environments with ESXi 5.x or later, allocate one large VMDK so the entire datastore is allocated to SoftNAS® as a single virtual disk. In environments using ESXi 4.x, virtual disks are limited to 2 TB maximum, so allocate as many virtual disks as needed to add this storage to the SoftNAS® VM.
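A quick way to estimate how many virtual disks are needed under the older 2 TB limit is ceiling division. This is a sketch using example figures: 2,100 GB of usable capacity (the seven 300 GB data drives from the example array) and 2,048 GB as a stand-in for the 2 TB per-disk limit:

```shell
# Usable datastore capacity: 7 data drives x 300 GB (example figures)
DATASTORE_GB=2100
# Approximate ESXi 4.x virtual disk size limit, in GB (2 TB)
MAX_VMDK_GB=2048

# Ceiling division: how many VMDKs are needed to cover the datastore
NUM_VMDKS=$(( (DATASTORE_GB + MAX_VMDK_GB - 1) / MAX_VMDK_GB ))
echo "$NUM_VMDKS virtual disks needed"
```

In this example, two virtual disks cover the datastore; on ESXi 5.x or later the calculation is unnecessary, since a single large VMDK suffices.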
Thin-provisioned VMDKs are faster to back up later (using a VMware vSphere backup tool), since only the storage actually in use is backed up.
Thick-provisioned VMDKs are slightly faster and may be preferred for higher-performance applications.
Hardware RAID generally provides the highest performance:
The disk controller is optimized for managing RAID operations, and all RAID overhead is handled in hardware.
LED indicators and other hot-swap functions (including failure notification and remediation) are handled by the vendor software.
When a disk fails, the hardware is optimized for rebuilding the array with the replaced disk (whether hot-swapped or manually swapped).
With a large number of disks (e.g., 48 or more), using hardware RAID with an optimal number of physical drives per RAID array provides significant performance advantages vs. very large single arrays (and hardware RAID rebuilds will be much faster this way).
After creating the RAID array, follow the usual steps in VMware vSphere / vCenter to add the array as a storage device and create a datastore. This datastore will then be used to create one or more VMDKs to be used as SoftNAS® data disks.
The resulting ten-drive layout:
7 data disks
2 parity disks
1 spare disk
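The arithmetic behind this layout is simple, but worth scripting when planning larger arrays. A minimal sketch using the example figures above (ten drives, one spare, dual parity, 300 GB per drive):

```shell
TOTAL_DISKS=10
SPARES=1
PARITY=2        # RAID 6 uses dual parity
SIZE_GB=300     # capacity per drive

# Drives left for data after removing the spare and parity drives
DATA_DISKS=$(( TOTAL_DISKS - SPARES - PARITY ))
USABLE_GB=$(( DATA_DISKS * SIZE_GB ))
echo "$DATA_DISKS data disks, $USABLE_GB GB usable"
```

For this example the script reports 7 data disks and 2100 GB of usable capacity, matching the layout above.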
Option 2 - Add Disks Individually to VMware and use Software RAID
In this case, add each 15K SAS drive to the VMware vSphere host directly. The options in VMware vSphere are to either format each disk and create a corresponding datastore per disk device, or use the disks directly as raw disks. Whichever approach is chosen, the goal is to make the disks available to the SoftNAS® VM on a one-to-one basis; i.e., each disk's storage is mapped to the SoftNAS® VM as a separate VMDK.
SoftNAS® will map disks into one or more storage pools, and software RAID will be applied to each disk group. Software RAID may provide increased administrative flexibility, enabling the SoftNAS® administrator to more quickly and easily add, expand, and manage RAID groups from the SoftNAS® StorageCenter interface. Of course, software RAID is handled by the CPU, which adds overhead to the VMware vSphere system and the SoftNAS® VM. In the event of a drive failure, the rebuild process also takes place in software, which is typically much slower than a rebuild handled by a hardware RAID controller.
The key at this stage is to map the disk drives to VMDKs and attach to SoftNAS®.
Disks mapped to datastores:
disk 1 — datastore1
disk 2 — datastore2
. . .
disk 10 — datastore10
Disks mapped as raw devices:
disk 1 — rawdisk1
disk 2 — rawdisk2
. . .
disk 10 — rawdisk10
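Either way, the mapping is strictly one name per disk. A trivial loop sketches the naming convention (the datastore names here are the example names used above):

```shell
# Print the one-to-one disk-to-datastore mapping for a ten-disk setup
for i in $(seq 1 10); do
  echo "disk $i -> datastore$i"
done
```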
Add VMDKs to SoftNAS® VM
Once an option has been chosen, connect the disk drives to the SoftNAS® VM as data disk VMDKs.
Inside Linux (where SoftNAS® executes), each attached VMDK will appear as a block disk device. The devices will be named /dev/sdb, /dev/sdc, etc., one Linux block device per data disk VMDK. These block devices appear as unpartitioned, raw disk devices inside Linux, so the next step is to partition each block device with a GPT partition table.
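If the partitioning is done manually from the Linux shell, the standard `parted` tool can write the GPT label. The sketch below runs against a scratch image file so it can be tried safely; substituting a real device such as /dev/sdb works the same way, but erases any existing data on that device:

```shell
# demo.img stands in for a real block device such as /dev/sdb;
# these same commands on a real device destroy its contents.
truncate -s 64M demo.img

# Write a GPT label, then one partition spanning the whole disk
parted -s demo.img mklabel gpt
parted -s demo.img mkpart primary 1MiB 100%

# Verify: the partition table type should be reported as gpt
parted -s demo.img print
rm -f demo.img
```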
After adding the VMDKs to the SoftNAS® VM and partitioning the disks, they become available to assign to storage pools.
Do not remove virtual disks attached to the VM after they are placed into production. Should this happen, the drives may be renumbered the next time the VM is rebooted, requiring an Import of the storage pools. (To remove VMDKs for any reason, be aware of these implications.)