
Preparing the SnapReplicate™ Environment

The first step in preparing a SnapReplicate™ deployment is to install and configure two SoftNAS® controller nodes. Each node should be configured with a common set of storage pools with the same pool names.

Note: Only storage pools with the same name will participate in SnapReplicate™. Pools with distinct names on each node will not be replicated.

For best results, it is recommended (but not required) that pools on both nodes be configured identically (or at least with approximately the same amount of available total storage in each pool).

In the following example, we have a storage pool named naspool1 on both nodes, along with three volumes: vol01, vol02, and websites. SnapReplicate™ will automatically discover the common pool named naspool1 on both nodes, along with the source pool's three volumes, and will auto-configure the pool and its volumes for replication. This means you do not have to create duplicate volumes (vol01, vol02, and websites) on the replication target side; SnapReplicate™ performs this step for you.
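The discovery rule above can be sketched as a small function. This is an illustrative model only, not the SnapReplicate™ API; the function and variable names are hypothetical:

```python
# Hypothetical sketch (not the SnapReplicate API): which pools replicate?
# Only pools whose names exist on BOTH nodes participate; the volume list
# is taken from the source side of each common pool.

def discover_replicated_pools(source_pools, target_pools):
    """source_pools/target_pools: dict of pool name -> list of volume names."""
    common = sorted(set(source_pools) & set(target_pools))
    # The source node's volume list wins; missing volumes are auto-created
    # on the target during the initial sync.
    return {pool: sorted(source_pools[pool]) for pool in common}

source = {"naspool1": ["vol01", "vol02", "websites"]}
target = {"naspool1": [], "scratchpool": ["tmp"]}  # scratchpool has no twin

print(discover_replicated_pools(source, target))
# {'naspool1': ['vol01', 'vol02', 'websites']}
```

Note that scratchpool is ignored entirely: it exists only on the target, so per the naming rule it never enters the replication task list.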


Other important considerations for the SnapReplicate™ environment include:

  • Network path between the nodes
  • NAT and firewall paths between the nodes (open port 22 for SSH between the nodes)
  • Network bandwidth available and whether to configure throttling to limit replication bandwidth consumption
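Before establishing the relationship, you may want to confirm that port 22 is actually reachable between the nodes. A minimal pre-flight check follows; this is a hypothetical helper, not part of SoftNAS tooling:

```python
# Hypothetical pre-flight check (not part of SoftNAS tooling): verify that
# the SSH port on the peer node is reachable before configuring SnapReplicate.
import socket

def port_open(host, port, timeout=3.0):
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example usage (the IP is a placeholder for your own target node):
# print(port_open("10.120.1.100", 22))
```

Run it from each node against the other; a False result usually points to a NAT, firewall, or Security Group rule blocking the path.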
Note


SnapReplicate™ creates a secure, two-way SSH tunnel between the nodes. Unique 2048-bit RSA public/private key pairs are generated on each node as part of the initial setup. These keys are unique to each node and provide secure, authenticated access control between the nodes. Password-based SSH logins between two SoftNAS nodes configured with SnapReplicate™ are disabled by default. Only PKI certificate-based authentication is allowed, and only from known hosts with pre-approved source IP addresses; i.e., the two SnapReplicate™ nodes (and the configured administrator on Amazon EC2).

After initial setup, SSH is used for command and control. SSH is also used (by default) as a secure data transport for authenticated, encrypted data transmission between the nodes.
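The PKI-only policy described above corresponds to standard OpenSSH server settings. The fragment below is illustrative only, assuming a stock OpenSSH sshd_config; the exact directives and user names SnapReplicate™ manages may differ:

```
# Illustrative sshd_config fragment (assumption: stock OpenSSH; the exact
# settings SnapReplicate(TM) applies may differ)
PasswordAuthentication no          # no password-based SSH logins
ChallengeResponseAuthentication no
PubkeyAuthentication yes           # key-based (PKI) authentication only
# Restrict logins to a known user from pre-approved peer addresses
# ("repluser" and the IP are hypothetical placeholders):
AllowUsers repluser@10.120.1.100
```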


Establishing a SnapReplicate™ Relationship

Be prepared with the IP address (or DNS name) of the target controller node, along with the SoftNAS StorageCenter login credentials for that node.

To establish the secure SnapReplicate™ relationship between two SoftNAS® nodes, follow the steps below:

  •  Log into the source controller's SoftNAS StorageCenter administrator interface using a web browser.


  •  In the Left Navigation Pane, select the SnapReplicate™ option.
    The SnapReplicate™ page will be displayed. 



  •  Click the Add Replication button in the Replication Control Panel.



The Add Replication wizard will be displayed. 


  •  Read the instructions on the screen and then click the Next button. 


  •  In the next step, enter the IP address or DNS name of the remote, target SoftNAS® controller node in the Hostname or IP Address text entry box. Note that by specifying the replication target's IP address, you are specifying the network path the SnapReplicate™ traffic will take.


Note
As of version 5.0, only private HA is supported, using Virtual IPs. A Virtual IP is a human-allocated IP address outside of the VPC's CIDR (Classless Inter-Domain Routing) range. For example, if your VPC CIDR range is 10.0.0.0/16, you could use 20.20.20.20. This address is then added to the VPC route table and pointed to the ENI device (NIC) of one of the SoftNAS HA nodes. A private high availability setup is recommended, as it allows you to host your HA setup entirely on an internal network, without a publicly accessible IP. To access your high availability EC2 cluster, an outside party would need to access your network directly, via a jumpbox, VPN, or other solution. This is inherently more secure than a native Elastic IP configuration. Elastic IP configuration is still possible on versions prior to 5.0, but it is not recommended for any production environment due to the inherent risk of a public IP.


To connect the nodes, the source node must be able to connect via HTTPS to the target node (similar to how the browser user logs into StorageCenter using HTTPS). HTTPS is used to create the initial SnapReplicate™ configuration. Next, several SSH sessions are established to verify that two-way communication between the nodes is possible. SSH is the default protocol used by SnapReplicate™ for replication and command/control.

When connecting two Amazon EC2 nodes, use the internal instance IP addresses (not the human-allocated Virtual IP outside the CIDR range mentioned above, nor the Elastic IP, which is a public IP). This is because traffic between EC2 instances is routed internally by default. Be sure to add the internal IP addresses of both EC2 instances to the Security Group to enable both HTTPS and SSH communications between the two nodes.

To view the internal IP address of each node, from the EC2 console, select Instances, then select the instance - the Private IPs entry shows the instance's private IP address used for SnapReplicate™.

For example:

  • Node 1: Virginia, East (zone 1-a), Private IP: 10.120.1.100 (initial source node)
  • Node 2: Virginia, East (zone 1-b), Private IP: 10.39.27.23 (initial target node)



Add the following Security Group entries:

Type       Security Group Entry
SSH        10.120.1.100/32
SSH        10.39.27.23/32
HTTPS      10.120.1.100/32
HTTPS      10.39.27.23/32
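The table above can be generated mechanically from the two private IPs: one SSH and one HTTPS rule per node, each scoped to a /32. A small sketch follows; this is a hypothetical helper, not an AWS or SoftNAS API, and the IPs are example values:

```python
# Hypothetical helper (not an AWS or SoftNAS API): derive the security-group
# ingress entries from each node's private IP. SSH is TCP 22, HTTPS is TCP 443.

def snapreplicate_sg_rules(private_ips, ports=(22, 443)):
    """Return one /32 ingress rule per (node IP, port) pair."""
    return [
        {"protocol": "tcp", "port": port, "cidr": f"{ip}/32"}
        for ip in private_ips
        for port in ports
    ]

# Example IPs only; substitute your own nodes' private addresses.
for rule in snapreplicate_sg_rules(["10.120.1.100", "10.39.27.23"]):
    print(rule)
```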



VMware: Similarly, it is important to understand the local network topology and which IP addresses (internal vs. public) will be used when connecting the nodes. Always use the internal/private IP address.

  •  Click the Next button.
    In the next step, provide the target node's admin credentials. 



  •  Enter the administrator's email ID for the target node in the Remote Admin User ID text entry box. 
  •  Enter the administrator's password for the target node in the Remote Admin Password text entry box. 
  •  Re-enter the password in the Verify Admin Password text entry box to confirm it. 
  •  Click the Next button.
Note
The IP address/DNS name and login credentials of the target node will be verified. If there is a problem, an error message will be displayed.


  •  If there is a problem, click the Previous button to make the necessary corrections, then click the Next button to continue. 



  •  In the next step, read the final instructions and then click the Finish button.



SyncImage compares the storage pools on each controller, looking for pools with the same name. For example, suppose a pool named "naspool1" is configured on each node. Volume discovery will automatically add all volumes in "naspool1" from the source node to the replication task list.

For each volume added as a SyncImage task, that volume will be created on the target node (if it already exists, it will be deleted and re-created from scratch to guarantee an exact replica). SyncImage then proceeds to create exact replicas of the volumes on the target.


After data from the volumes on the source node is mirrored to the target, SnapReplicate™ transfers run once per minute to keep the target node current with data block changes from the source volumes.
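The full-mirror-then-deltas cycle can be modeled in a few lines. The following is a toy illustration of the concept only; the real SnapReplicate™ engine operates on storage snapshots, not Python dicts:

```python
# Toy model of the replication cycle (illustrative only; the real engine
# works on storage snapshots): after the initial full mirror, only blocks
# changed since the last snapshot are shipped to the target each minute.

def take_snapshot(volume):
    return dict(volume)                    # point-in-time copy: block -> data

def delta_since(snapshot, volume):
    """Blocks added/changed since the snapshot; None marks a deleted block."""
    changes = {b: d for b, d in volume.items() if snapshot.get(b) != d}
    changes.update({b: None for b in snapshot if b not in volume})
    return changes

def apply_delta(target, changes):
    for block, data in changes.items():
        if data is None:
            target.pop(block, None)        # block was deleted on the source
        else:
            target[block] = data           # block was added or rewritten

source = {0: b"aaaa", 1: b"bbbb"}
target = dict(source)                      # state after the SyncImage mirror
snap = take_snapshot(source)

source[1] = b"BBBB"                        # writes land on the source...
source[2] = b"cccc"
apply_delta(target, delta_since(snap, source))   # ...shipped a minute later
assert target == source                    # target is "hot" again
```

The key point the model captures is that each cycle ships only the delta since the previous snapshot, which is why steady-state replication traffic is proportional to the change rate, not to total volume size.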


The tasks and an event log will be displayed in the SnapReplicate™ Control Panel section. 


This indicates that a SnapReplicate™ relationship is established and the replication should be taking place.


Replication Granularity

Replication granularity in SoftNAS is handled at the volume level. To omit a volume from replication, simply uncheck the 'Enable Replication' box in the Snapshots tab at volume creation. For more detailed information about managing snapshots and volumes, see Snapshots in StorageCenter.



Note


If you disable replication and then re-enable it (even just a few minutes later), a full re-mirror of the volume will result. Disabling does NOT pause replication so that it can resume from the same point.

