Configuring VM Settings
Required Settings
After deploying the OVF to create the SoftNAS Virtual Storage Appliance VM, configure the VM settings in accordance with best practices and network needs. The boot disk (Hard Disk 1) should be set to 100 GB, thin-provisioned; this is the default configuration and should not be changed without reason.
For a quick benchmarking resource configuration, use 4 vCPUs and 16 GB of RAM. Configure storage and run benchmarking tools to observe resource utilization in the SoftNAS StorageCenter Dashboard charts and vSphere performance charts.
Note: The operating system and SoftNAS consume up to 1 GB of RAM; most of the remaining RAM is used for cache memory and metadata. The more RAM assigned to the VM, the better read cache performance will be, as SoftNAS keeps as much data in RAM cache as possible. For deduplication, allocate at least 1 GB of RAM per terabyte of deduplicated storage so the deduplication tables stay in memory (or supplement the RAM cache with a read cache device).
Optional Settings
Paravirtual SCSI Disk Controller Support
For maximum throughput and IOPS on VMware, choose the Paravirtual SCSI Controller for the SoftNAS VM (instead of using the default LSI Logic Parallel SCSI controller).
VM Snapshot Mode
Before applying software updates to SoftNAS after it is in production, and to support online backups in popular backup programs, VM snapshots are useful as part of the backup and recovery process. Depending on the plan to manage backups of VM data, choose which mode snapshots will operate in.
- Independent Mode - to enable smaller VM snapshots, configure the boot disk, Hard Disk 1, in the "Independent" mode. This causes VM snapshots to apply only to this first hard disk by default (and not include all added data disks, which could be prohibitively large). The advantage of using Independent mode is VM snapshots will be faster and smaller.
- Dependent Mode - by default, VM snapshots include all hard disks attached to the VM. When used with SoftNAS and a VM backup process, this setting causes all SoftNAS VM disks to be backed up together as a set. This results in much larger backup sets, but may be preferable as a means of achieving additional protection and recoverability in the event of a disaster or need to restore the entire storage system to a different computer or location. If there are only a few terabytes to back up, this may be the prudent choice.
Network Adapter
On a typical 1 gigabit network, the default E1000 network adapter is sufficient; however, for a 10 gigabit or higher-performance network card, the VMXNET 3 network adapter should be used for best results and higher throughput. Note that installation of the VMXNET 3 requires installation of the proper VMware Tools in the guest operating system (in this case, CentOS 64-bit Linux).
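After replacing the adapter and installing VMware Tools, you can confirm which driver the guest is actually using. A minimal sketch run inside the CentOS-based SoftNAS VM, assuming the interface is named eth0; substitute your interface name:

    # Report the driver bound to the interface; it should show vmxnet3 rather than e1000
    ethtool -i eth0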
Memory / CPU Hot Plug
It is recommended to enable CPU Hot Plug and disable Memory Hot Add. CPU Hot Plug makes it convenient to add vCPUs later, for example when enabling data compression or other features that consume additional CPU; Linux handles CPUs added at run time without issue.
Note: Add memory with the system powered down and disable hot add of memory at run time.
Performance Tuning for VMware vSphere
Achieving peak storage performance in the VMware environment involves tuning the VMware configuration beyond default values. The following are recommended best practices for tuning VMware for use with SoftNAS.
VMDirectPath
VMDirectPath provides a means of passing a disk controller device directly through to the guest operating system (i.e., CentOS Linux).
To enable VMDirectPath from the Configuration page in the vSphere Client:
- Select the ESX host from Inventory.
- Select the Configuration tab.
- Select Advanced Settings under Hardware.
- Click Edit and select the device(s) to pass through (storage controller, physical NIC).
Note: The Intel VT-d (or equivalent) processor feature is required to support VMDirectPath.
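Once the host has been rebooted and the device assigned to the SoftNAS VM, the passed-through controller should appear as an ordinary PCI device inside the guest. A sketch run in the CentOS-based SoftNAS VM (the grep patterns are only examples; match on your controller's vendor or class):

    # List PCI devices visible to the guest and look for the passed-through storage controller
    lspci | grep -i -e scsi -e sas -e raid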
VM SCSI Controller - Set to Paravirtual
In VMware, change the SCSI controller type to "Paravirtual", which provides more efficient access to storage.
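After changing the controller type and powering the VM back on, you can confirm from inside the guest that the paravirtual driver is in use. A minimal sketch run in the CentOS-based SoftNAS VM:

    # The vmw_pvscsi module is loaded when the Paravirtual SCSI controller is active
    lsmod | grep vmw_pvscsi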
Physical NIC Settings
The host's physical NIC has settings that, when tuned, can improve utilization and performance:
Most 1GbE or 10GbE NICs (Network Interface Cards) support a feature called interrupt moderation or interrupt throttling, which coalesces interrupts from the NIC to the host so that the host does not get overwhelmed and spend too many CPU cycles processing interrupts.
To disable physical NIC interrupt moderation on the ESXi host, execute the following commands from an ESXi SSH session.
Find the appropriate module parameter for the NIC by first finding the driver the NIC uses:

    esxcli network nic list

Then list the module parameters for that driver:

    esxcli system module parameters list -m <driver>
Info: This example applies to the Intel 10GbE driver, ixgbe:

    esxcli system module parameters set -m ixgbe -p "InterruptThrottleRate=0"
Note: Also check the host for SR-IOV support, which provides additional performance and throughput in virtualized systems like VMware.
Adjust Network Heap Size for high network traffic
By default, the ESX server network stack allocates 64 MB of buffers to handle network data. Increase the buffer allocation from 64 MB to 128 MB to handle more network data.
Info: Navigate to the Configuration tab for the ESX Server host and select Advanced Settings > VMkernel > Boot > VMkernel.Boot.netPktHeapMaxSize.
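Depending on your ESXi version, the same boot-time setting may also be adjustable from the command line. This is a sketch, assuming your build exposes netPktHeapMaxSize through esxcli (check with the list command before setting it):

    # Show the current maximum packet heap size
    esxcli system settings kernel list -o netPktHeapMaxSize
    # Raise the maximum packet heap size to 128 MB (takes effect after a reboot)
    esxcli system settings kernel set -s netPktHeapMaxSize -v 128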
Virtual NIC Settings
The VM's virtual network adapter has several tuning options that can also provide much better throughput:
- Configure jumbo frames (MTU 9000) on the vSwitch and virtual network adapter (be sure the physical switch supports MTU 9000); see the sketch after this list.
Info: We recommend VMXNET 3 virtual NICs.
- To disable virtual interrupt coalescing for VMXNET 3 virtual NICs, open the vSphere Client, navigate to VM Settings > Options tab > Advanced General > Configuration Parameters, and add an entry for ethernetX.coalescingScheme with a value of disabled.
Info: An alternative way to disable virtual interrupt coalescing for all virtual NICs on the host (which affects all VMs, not just the latency-sensitive ones) is to set the advanced networking performance option under Configuration > Advanced Settings > Net > CoalesceDefaultOn to 0 (disabled).
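The MTU settings above can also be applied from an ESXi SSH session. This is a minimal sketch, assuming a standard vSwitch named vSwitch1 and a VMkernel interface named vmk1 dedicated to storage traffic; substitute your own names:

    # Set MTU 9000 on the standard vSwitch that carries storage traffic
    esxcli network vswitch standard set -v vSwitch1 -m 9000
    # Set MTU 9000 on the VMkernel interface used for storage
    esxcli network ip interface set -i vmk1 -m 9000

For the coalescing setting, the Configuration Parameters entry takes the form ethernetX.coalescingScheme = disabled, where X is the index of the VMXNET 3 adapter (for example, ethernet0).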
Disable LRO
To disable large receive offload (LRO) for the VMXNET 3 adapter:
- Log into the SoftNAS VM as root using SSH or the Desktop Console.
- Append the following line to /etc/modprobe.conf:

    options vmxnet3 disable_lro=1

- Reload the VMXNET3 driver by removing and re-inserting the module:

    modprobe -r vmxnet3
    modprobe vmxnet3
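To confirm LRO is off after the driver reloads, check the interface offload settings from inside the guest. A sketch, assuming the VMXNET 3 interface is named eth0; substitute your interface name:

    # Show offload settings for the interface and filter for the LRO flag
    ethtool -k eth0 | grep large-receive-offload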
Physical Host BIOS Settings
On most servers, these BIOS Settings can improve the overall performance of the host:
- Turn on Hyper-threading in BIOS
- Confirm that the BIOS is set to enable all populated sockets for all cores
- Enable “Turbo Mode” for processors that support it
- Confirm that hardware-assisted virtualization features are enabled in the BIOS
- Disable any other power-saving mode in the BIOS
- Disable any unneeded devices from the BIOS, such as serial and USB ports
- In order to allow ESXi to control CPU power-saving features, set power management in the BIOS to “OS Controlled Mode” or equivalent. Even without planning to use these power-saving features, ESXi provides a convenient way to manage them.
- C-states deeper than C1/C1E (i.e., C3, C6) allow further power savings, though with an increased chance of performance impacts. We recommend, however, enabling all C-states in the BIOS and then using ESXi host power management to control their use.
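After rebooting with the new BIOS settings, some of them can be confirmed from an ESXi SSH session. A minimal sketch:

    # Verify that hyper-threading is supported, enabled, and active on the host
    esxcli hardware cpu global get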
NUMA Settings
NUMA systems are advanced server platforms with more than one system bus. They can harness large numbers of processors in a single system image with superior price to performance ratios. The high latency of accessing remote memory in NUMA (Non-Uniform Memory Access) architecture servers can add a non-trivial amount of latency to application performance.
For best performance of latency-sensitive applications in guest operating systems, all vCPUs should be scheduled on the same NUMA node, and all VM memory should fit and be allocated out of the local physical memory attached to that NUMA node.
Processor affinity for vCPUs to be scheduled on specific NUMA nodes, as well as memory affinity for all VM memory to be allocated from those NUMA nodes, can be set using the vSphere Client. Navigate to VM Settings > Options tab > Advanced General > Configuration Parameters. From here, add entries for numa.nodeAffinity=0, 1, etc., where 0, 1, etc. are the processor socket numbers.
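As an illustration, pinning the SoftNAS VM to the first processor socket would use a Configuration Parameters entry like the following (a sketch; the node numbers depend on your host's topology):

    numa.nodeAffinity = 0

To allow scheduling on sockets 0 and 1, the value would be 0,1 instead.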
Networking Tips
10 Gigabit Network Configurations on VMware vSphere
By default, the SoftNAS VM on VMware vSphere ships with the E1000 virtual NIC adapter, and VMware defaults to MTU 1500.
For best performance results above 1 gigabit, follow the steps outlined below:
- Replace the E1000 virtual NIC adapter with a vmxnet3 on the SoftNAS VM.
- Use MTU 9000 instead of MTU 1500 for vSwitch, vmKernel and physical switch configurations. Be sure to configure the network interface in SoftNAS for MTU 9000 also.
- Refer to the MTU 9000 section for more information.
A dedicated VLAN for storage traffic is recommended. For VMware, refer to the Performance Tuning for VMware vSphere section for details.
To increase throughput and resiliency, VMware and other vendors recommend the use of iSCSI multipathing.
Since SoftNAS operates in a hypervisor environment, it is possible to configure multi-path operation as follows:
- On the VMware host where the SoftNAS VM runs, install and use multiple physical NIC adapters.
- Assign a dedicated vSwitch for each incoming iSCSI target path (one per physical NIC).
- Assign the SoftNAS VM a dedicated virtual NIC adapter for each incoming iSCSI target path (per vSwitch physical NIC).
- Assign a unique IP address to each corresponding Linux network interface (for each virtual NIC attached to the SoftNAS VM).
- Restart the SoftNAS iSCSI service and verify connectivity from the iSCSI initiator client(s) to each iSCSI target path.
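After restarting the iSCSI service, a quick way to confirm each target path is to check that the target is listening on every storage IP address from the SoftNAS VM. A sketch (3260 is the standard iSCSI port):

    # Confirm the iSCSI target is listening on TCP port 3260 for each storage interface
    netstat -an | grep :3260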