Setting Up SnapReplicate and SNAP HA™

Setting up SnapReplicate™ provides replication of data between SoftNAS® instances for greater redundancy. SNAP HA™, in turn, adds an additional layer of protection by providing automatic failover between SoftNAS® instances.

For in-depth information on SoftNAS High Availability functions, consult the SoftNAS High Availability Guide.

Setting Up For SnapReplicate™

The following is required for a standard SoftNAS SnapReplicate™ and SNAP HA™ implementation using Virtual IP addresses:

  • Create a virtual network with separate private subnets.
  • Deploy two instances into the private subnets (in different availability zones for greater redundancy).
  • Configure SnapReplicate™ and SNAP HA™ using SoftNAS StorageCenter.


An AWS instance is used on this page to illustrate SnapReplicate™ and SNAP HA™ setup. If step-by-step setup is needed for the Azure or VMware platform, please refer to the appropriate pages.

Configuring StorageCenter

Once the StorageCenter interface has been accessed, set up the Disk Devices, Storage Pools, and Volumes that will be required for HA.

When setting up storage pools for replication, they must have the same name on both nodes; otherwise, replication will not work properly. Also, create a volume on the source-side node.

For any high-availability solution involving the transition and synchronization of data between two nodes, there is some risk of limited data loss at the moment of failure. This potential loss is mitigated by caching and synchronization options made available by the underlying system in use, or added by the vendor. Buurst's implementation of ZFS is no exception to this general rule. ZFS offers inherent options to either prevent data loss or improve performance at increased risk. With the default settings used when creating pools and volumes, SoftNAS' implementation balances the concerns of data loss and performance. If data retention is your primary concern, we recommend changing the Sync mode setting to 'always' when creating your pools. For more information about sync mode settings and the options available, see Working with Storage Pools.
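If you prefer to confirm or adjust the setting outside of StorageCenter, the following is a minimal sketch assuming shell access to the node and a pool named naspool1 (the pool name used in the example later on this page). It sets the underlying ZFS sync property to 'always' and reads it back; SoftNAS normally manages this property through the StorageCenter UI, so treat this as illustrative only.

    import subprocess

    # Assumed pool name, matching the example used later in this document.
    POOL = "naspool1"

    # Commit every write synchronously before acknowledging it
    # (favors data retention over raw performance).
    subprocess.run(["zfs", "set", "sync=always", POOL], check=True)

    # Read the property back to confirm the change.
    result = subprocess.run(
        ["zfs", "get", "-H", "-o", "value", "sync", POOL],
        capture_output=True, text=True, check=True)
    print("sync mode for", POOL + ":", result.stdout.strip())
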
When selecting storage for replication, remember that a like-for-like storage configuration gives the most consistent and reliable throughput. Replicating data from a high-performance EBS storage volume to a lower-performance object storage volume, for example, can create bottlenecks that can lead to potential data loss.

SnapReplicate

Preparing the SnapReplicate™ Environment

The first step in preparing a SnapReplicate™ deployment is to install and configure two SoftNAS® controller nodes. Each node should be configured with a common set of storage pools with the same pool names.

Only storage pools with the same name will participate in SnapReplicate™. Pools with distinct names on each node will not be replicated.

For best results, it is recommended (but not required) that pools on both nodes be configured identically (or at least with approximately the same amount of available total storage in each pool).

In the example, we have a storage pool named naspool1 on both nodes, along with three volumes: vol01, vol02, and websites. In this case, SnapReplicate™ automatically discovers the common pool named naspool1 on both nodes, along with the source pool's three volumes, and auto-configures the pool and its volumes for replication. This means you do not have to create duplicate volumes (vol01, vol02, and websites) on the replication target side; SnapReplicate™ performs this action for you.
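As an illustration of the common-name rule, the short Python sketch below compares two example pool lists and shows which names would participate in replication. The pool names other than naspool1 are hypothetical; on a live node the lists could be gathered with 'zpool list -H -o name'.

    # Example pool lists; only names present on BOTH nodes are replicated.
    source_pools = {"naspool1", "scratchpool"}   # pools on the source node (hypothetical)
    target_pools = {"naspool1", "backuppool"}    # pools on the target node (hypothetical)

    replicated = source_pools & target_pools      # common names participate
    ignored = (source_pools | target_pools) - replicated

    print("Replicated pools:", sorted(replicated))   # ['naspool1']
    print("Ignored pools:   ", sorted(ignored))      # ['backuppool', 'scratchpool']
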

Other important considerations for the SnapReplicate™ environment include:

  • Network path between the nodes
  • NAT and firewall paths between the nodes (open port 22 for SSH between the nodes)
  • Network bandwidth available and whether to configure throttling to limit replication bandwidth consumption

SnapReplicate™ creates a secure, two-way SSH tunnel between the nodes. Unique 2048-bit RSA public/private keys are generated on each node as part of the initial setup. These keys are unique to each node and provide secure, authenticated access control between the nodes. Password-based SSH logins are disabled and not permitted (by default) between two SoftNAS nodes configured with SnapReplicate™. Only PKI certificate-based authentication is allowed, and only from known hosts with pre-approved source IP addresses; i.e., the two SnapReplicate™ nodes (and the configured administrator on Amazon EC2).
After initial setup, SSH is used for command and control. SSH is also used (by default) as a secure data transport for authenticated, encrypted data transmission between the nodes.
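As an optional pre-flight check, the sketch below (Python with the paramiko library) attempts a key-based SSH connection to the peer node on port 22, mirroring the PKI-only policy described above. The peer IP, user name, and key path are assumptions for the example; SnapReplicate™ generates and exchanges its own keys during setup.

    import paramiko

    PEER_IP = "10.120.1.100"          # private IP of the peer node (example value)
    KEY_FILE = "/root/.ssh/id_rsa"    # assumed path to this node's private key

    client = paramiko.SSHClient()
    client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    try:
        # Key-only authentication; password logins are not attempted.
        client.connect(PEER_IP, port=22, username="root", key_filename=KEY_FILE,
                       look_for_keys=False, allow_agent=False, timeout=10)
        print("Key-based SSH to peer succeeded")
    finally:
        client.close()
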

Establishing a SnapReplicate™ Relationship

Be prepared with the IP address (or DNS name) of the target controller node, along with the SoftNAS StorageCenter login credentials for that node.

To establish the secure SnapReplicate™ relationship between two SoftNAS® nodes, simply follow the steps given below:

  • Log into the source controller's SoftNAS StorageCenter administrator interface using a web browser.

  • In the Left Navigation Pane, select the SnapReplicate™ option.
    The SnapReplicate™ page will be displayed. 

  • Click the Add Replication button in the Replication Control Panel.

The Add Replication wizard will be displayed. 

  • Read the instructions on the screen and then click the Next button. 

  • In the next step, enter the IP address or DNS name of the remote, target SoftNAS® controller node in the Hostname or IP Address text entry box. Note that by specifying the replication target's IP address, you are specifying the network path the SnapReplicate™ traffic will take.

To connect the nodes, the source node must be able to connect via HTTPS to the target node (similar to how the browser user logs into StorageCenter using HTTPS). HTTPS is used to create the initial SnapReplicate™ configuration.

Next, several SSH sessions are established to ensure two-way communications between the nodes is possible.

SSH is the default protocol used by SnapReplicate™ for replication and command/control.
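Before running the wizard, you can optionally confirm that the target node's StorageCenter is reachable over HTTPS from the source node. The sketch below is a minimal connectivity test in Python; the target IP is hypothetical, and verify=False is used only because SoftNAS appliances commonly present a self-signed certificate.

    import requests

    TARGET_IP = "10.0.0.20"   # hypothetical private IP of the target node

    # Self-signed certificates are common on appliances, so skip verification
    # for this reachability test only.
    resp = requests.get("https://" + TARGET_IP + "/", verify=False, timeout=10)
    print("HTTPS reachable, status code:", resp.status_code)
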


When connecting two Amazon EC2 nodes, use the internal instance IP addresses, not the manually allocated virtual IP that lies outside the nodes' CIDR range, and not the Elastic IP, which is a public address. This is because traffic between instances in EC2 is routed internally by default. Be sure to add the internal IP addresses of both EC2 instances to the Security Group so that both HTTPS and SSH communications are allowed between the two nodes.

To view the internal IP address of each node, from the EC2 console, select Instances, then select the instance - the Private IPs entry shows the instance's private IP address used for SnapReplicate™.
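If you prefer the command line to the EC2 console, the private IPs can also be retrieved with the AWS SDK. The sketch below uses boto3 with hypothetical instance IDs and assumes the us-east-1 region.

    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")
    instance_ids = ["i-0123456789abcdef0", "i-0fedcba9876543210"]  # hypothetical IDs

    resp = ec2.describe_instances(InstanceIds=instance_ids)
    for reservation in resp["Reservations"]:
        for instance in reservation["Instances"]:
            print(instance["InstanceId"], instance.get("PrivateIpAddress"))
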

For example:

Node 1 - Virginia, East (zone 1-a), Private IP: 10.120.1.100 (initial source node)

Node 2 - Virginia, East (zone 1-b), Private IP: 10.39.27.23 (initial target node)

Add the following Security Group entries:

Type      Security Group Entry
SSH       10.120.1.100/32
SSH       10.39.27.23/32
HTTPS     10.120.1.100/32
HTTPS     10.39.27.23/32
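The same rules can be added programmatically. The sketch below uses boto3 to add the SSH (port 22) and HTTPS (port 443) ingress entries from the table above to an existing security group; the security group ID and region are assumptions for the example.

    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")
    GROUP_ID = "sg-0123456789abcdef0"                    # hypothetical security group ID
    NODE_CIDRS = ["10.120.1.100/32", "10.39.27.23/32"]   # the two nodes' private IPs

    permissions = [
        {
            "IpProtocol": "tcp",
            "FromPort": port,
            "ToPort": port,
            "IpRanges": [{"CidrIp": cidr} for cidr in NODE_CIDRS],
        }
        for port in (22, 443)   # SSH and HTTPS
    ]
    ec2.authorize_security_group_ingress(GroupId=GROUP_ID, IpPermissions=permissions)
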

VMware: Similarly, it is important to understand the local network topology and whether internal or public IP addresses will be used when connecting the nodes. ALWAYS USE THE INTERNAL/PRIVATE IP ADDRESS.

  • Click the Next button to proceed.

  • Enter the administrator's email ID for the target node in the Remote Admin User ID text entry box.
  • Enter the administrator's password for the target node in the Remote Admin Password text entry box.
  • Re-enter the administrator's password for the target node to confirm it in the Verify Admin Password text entry box.
  • Click the Next button.

  • The IP address/DNS name and login credentials of the target node will be verified. If there is a problem, an error message will be displayed. Click the Previous button to make the necessary corrections and then click the Next button to continue. 

Setting Up SNAP HA™

  • From the SoftNAS SnapReplicate panel, click on Add SNAP HA™.
  • Click Next on the Welcome screen.

  • If deploying HA for an Azure pairing, or from an Azure virtual machine to AWS or VMware, Azure credentials will be required. These will auto-populate based on configuration choices made during virtual machine setup. Click Next to continue.

  • Add the Virtual IPs of both the primary and secondary instances when prompted by the SnapReplicate interface. When creating your Virtual IP, be sure that the IP chosen lies outside the CIDR block selected for the two replication nodes (a quick check is sketched after these steps).

  • Provide the administrator credentials if prompted. However, in most cases, your IAM policy will handle this.

  • Next, fine-tune the HA deployment.
    1. You can set the maximum number of retries before your virtual machine fails over.
    2. You can set the maximum time (in seconds) that storage can be unavailable before a failover is triggered.
    3. You can also set a default for the maximum ioping request time, to ensure that a failover is triggered more quickly in the event of failure.
    4. Finally, you can determine the behavior of the failed node during a failover. 
      1. Reboot - this is the default option, allowing for quicker recovery and re-establishment of high availability, as the failed node will reboot, and SNAP HA will be reactivated, with the original node set as secondary.
      2. Shutdown - The failed node will remain shut down. You will need to reboot the instance manually to re-establish high availability. 
      3. None (No action taken) - This option is only for debug or support use. The failed node will remain in its current state.

  • Click on Finish.
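As referenced in the virtual IP step above, a quick way to confirm that a candidate virtual IP lies outside the nodes' CIDR block is shown below, using only the Python standard library; the subnet and virtual IP values are hypothetical.

    import ipaddress

    node_cidr = ipaddress.ip_network("10.0.1.0/24")    # subnet of the two nodes (hypothetical)
    virtual_ip = ipaddress.ip_address("172.16.0.100")  # candidate virtual IP (hypothetical)

    if virtual_ip in node_cidr:
        raise ValueError(f"{virtual_ip} is inside {node_cidr}; choose an IP outside it")
    print(f"{virtual_ip} is outside {node_cidr}: suitable as the HA virtual IP")
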

At this point SoftNAS® will do all of the heavy lifting that is required to establish HA, without the need for any user intervention. The process may take several minutes. After completion, the High Availability SoftNAS® pair has been successfully set up across Availability Zones.

SoftNAS strongly recommends further safeguarding against data loss, without compromising performance, by creating a write log, or ZIL. For instructions on how to configure a ZIL/write log, see Configuring Read Cache and Write Log.