Performance Tuning for VMware vSphere
Achieving peak storage performance in the VMware environment involves tuning the VMware configuration beyond default values. The following are recommended best practices for tuning VMware for use with SoftNAS.
VMDirectPath
- VMDirectPath provides a means of passing a disk controller device directly through to the guest operating system (i.e., CentOS Linux).
- To open the VMDirectPath Configuration page in the vSphere Client:
- Select the ESX Host from Inventory.
- Select the Configuration tab.
- Navigate to Hardware > Advanced Settings.
- Click Edit and select the device to pass through (storage controller, physical NIC).
The Intel VT-d (or equivalent) processor feature is required to support VMDirectPath.
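To identify the PCI address of the storage controller or NIC to be passed through, the host's PCI devices can be listed from the ESXi shell (a reference sketch; output columns vary by ESXi version):
esxcli hardware pci list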
VM SCSI Controller - Set to Paravirtual
- In VMware, change the SCSI controller type to Paravirtual, which provides more efficient access to storage.
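For reference, this change is reflected in the VM's .vmx configuration roughly as follows (scsi0 is assumed to be the controller in question):
scsi0.virtualDev = "pvscsi"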
Physical NIC Settings
- A host physical NIC exposes settings that can improve utilization and performance.
- Most 1GbE or 10GbE NICs (Network Interface Cards) support a feature called interrupt moderation or interrupt throttling, which coalesces interrupts from the NIC to the host so that the host does not get overwhelmed and spend too many CPU cycles processing interrupts.
Disable Physical NIC Interrupt Moderation on the ESXi Host
Find the driver using the following ESXi command:
esxcli network nic list
List the module parameters for that driver by issuing the following command:
esxcli system module parameters list -m <driver>
For example, for the Intel 10GbE driver (ixgbe), disable interrupt throttling as follows:
esxcli system module parameters set -m ixgbe -p "InterruptThrottleRate=0"
Check the host for SR-IOV support, which provides additional performance and throughput in virtualized systems like VMware.
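As a sketch of how this is typically done (the ixgbe driver and virtual function count are examples, and a host reboot is usually required), SR-IOV virtual functions are enabled through the NIC driver's module parameters, and SR-IOV capable NICs can then be listed:
esxcli system module parameters set -m ixgbe -p "max_vfs=8"
esxcli network sriovnic list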
Adjust Network Heap Size for high network traffic
By default, the ESX server network stack allocates 64 MB of buffers to handle network data.
- Increase the buffer allocation from 64 MB to 128 MB to handle more network data.
Change Heap Size on the ESX Host
Navigate to the ESX Server Host > Configuration Tab > Advanced Settings > VMkernel > Boot > VMkernel.Boot.netPktHeapMaxSize.
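The same value can also be changed from the ESXi shell (a sketch; the option path may differ between ESXi versions, and boot-time options require a host reboot to take effect):
esxcli system settings advanced set -o /VMkernel/Boot/netPktHeapMaxSize -i 128
esxcli system settings advanced list -o /VMkernel/Boot/netPktHeapMaxSize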
Virtual NIC Settings
- Configure jumbo frames (MTU 9000) on the vSwitch and the virtual network adapter, as shown in the sketch below (make sure the physical switch also supports MTU 9000).
We recommend VMXNET 3 virtual NICs.
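As a sketch (vSwitch1 and vmk1 are placeholder names), the MTU can be set from the ESXi shell on a standard vSwitch and, where applicable, on a VMkernel interface:
esxcli network vswitch standard set -v vSwitch1 -m 9000
esxcli network ip interface set -i vmk1 -m 9000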
Disable Virtual Interrupt Coalescing for VMXNET 3 Virtual NIC
- Navigate to the vSphere Client > VM Settings > Options Tab > Advanced General > Configuration Parameters.
- Add an entry for ethernetX.coalescingScheme with the value disabled (where X is the number of the virtual NIC).
An alternative way to disable virtual interrupt coalescing for all virtual NICs on the host, which affects all VMs rather than just the latency-sensitive ones, is to navigate to Configuration > Advanced Settings > Net and set CoalesceDefaultOn to 0 (disabled).
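For reference (ethernet0 is assumed to be the VMXNET 3 adapter in question), the per-NIC setting appears in the VM's .vmx file as:
ethernet0.coalescingScheme = "disabled"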
Disable LRO
- To disable LRO, reload the VMXNET 3 driver in the SoftNAS CentOS operating system as follows.
SSH into the SoftNAS® VM as root and issue the following command:
modprobe -r vmxnet3
Add the following line in /etc/modprobe.conf:
options vmxnet3 disable_lro=1
Reload the driver using the following command:
modprobe vmxnet3
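To confirm that LRO is now disabled (eth0 is a placeholder for the VMXNET 3 interface name):
ethtool -k eth0 | grep large-receive-offload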
Physical Host BIOS Settings
On most servers, these BIOS Settings can improve the overall performance of the host:
- Turn on Hyper-Threading in the BIOS
- Confirm that the BIOS is set to enable all populated sockets for all cores
- Enable "Turbo Mode" for processors that support it
- Confirm that hardware-assisted virtualization features are enabled in the BIOS
- Disable any other power-saving mode in the BIOS
- Disable any unneeded devices from the BIOS, such as serial and USB ports
- To allow ESXi to control CPU power-saving features, set power management in the BIOS to "OS Controlled Mode" or its equivalent. Even if you do not plan to use these power-saving features, ESXi provides a convenient way to manage them.
- C-states deeper than C1/C1E (e.g., C3, C6) allow further power savings, though with an increased chance of performance impact. We nevertheless recommend enabling all C-states in the BIOS and then using ESXi host power management to control their use.
NUMA Settings
NUMA (Non-Uniform Memory Access) systems are advanced server platforms with more than one system bus. They can harness large numbers of processors in a single system image with superior price-to-performance ratios. However, the high latency of accessing remote memory in NUMA servers can add a non-trivial amount of latency to application performance.
For best performance of latency-sensitive applications in guest OSes, all vCPUs should be scheduled on the same NUMA node and all VM memory should fit and be allocated out of the local physical memory attached to that NUMA node.
- Processor affinity for vCPUs to be scheduled on specific NUMA nodes, as well as memory affinity for all VM memory to be allocated from those NUMA nodes, can be set by navigating to VM Settings > Options Tab > Advanced General > Configuration Parameters.
Add the following entries:
numa.nodeAffinity=<0,1,etc.>
where <0,1,etc.> are the processor socket numbers.
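For example, to constrain a VM to the first NUMA node (socket 0) on a two-socket host, the entry would be:
numa.nodeAffinity=0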