You have a pool created and ready, but you need to either migrate this pool to another SoftNAS instance (for example, to distribute load), or shift data at the pool level from one pool to another, such as when changing the underlying EBS volume type to improve throughput, or when migrating from EBS to S3 or vice versa.
This document highlights the available methods for migrating pool data, outlines the pros and cons of each at a high level, and then shows how each task is performed.
Migrating a Pool within the same instance
The following are the two primary methods for migrating a pool within the same instance:
EBS Volume Snapshots: Using EBS volume snapshots to migrate data from one pool to another incurs no I/O overhead and is a simpler process. However, this method can require more downtime, and you can only migrate between EBS volume types (no migrations to or from S3 disks).
ZFS Send and Receive: Migrating with ZFS send and receive results in lower downtime and can be performed to and from S3 disks, as well as between EBS volume types. The main drawback of this method is that it incurs significant I/O overhead. For this reason, it is recommended to perform this action outside of business hours (or outside peak business hours if operating 24/7). You will also need to make sure that your license capacity permits this method.
NOTE: For large amounts of data, ZFS send and receive may be inefficient and can cause significant performance degradation, so you may want to consider migrating to another instance and then importing the resulting pool into the current instance.
Migrating a Pool to another SoftNAS instance
The following methods can be used when migrating a pool to another SoftNAS® instance:
If in the same Availability Zone:
Zpool Export (and import from the secondary instance UI): This is the fastest method, incurs little to no overhead, and does not incur AWS charges, as you are still using the same EBS disks. It is simply a matter of detaching the disks and then attaching them to another instance in the same Availability Zone. You will need to migrate the services' configuration files yourself.
Migrate using SnapReplicate™: SnapReplicate™ is the easiest method, results in less downtime, and gives you the ability to transfer your pool data to different EBS volume types, or to and from S3. It incurs I/O overhead, so it should be performed outside of peak business hours. It takes care of the configuration files for you.
If migrating between different Availability Zones:
EBS Volume Snapshots
EBS Volume Snapshots Method (migrating pools on the same instance or a different instance)
Note: If you are using a RAID array setup, it is highly recommended to use the Backup & Restore applet in the UI to back up your pool(s); this will make future pool restoration easier.
Stop any write operations on that pool during the snapshot operations.
Create a snapshot of each volume participating in the pool from the AWS console, following the AWS documentation:
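As an alternative to the console, snapshots can also be created with the AWS CLI. The sketch below is illustrative only; the volume ID is a placeholder you would replace with the IDs of the volumes in your pool:

```shell
# Create a snapshot of one EBS volume in the pool (repeat for each volume).
# vol-0123456789abcdef0 is a placeholder volume ID.
aws ec2 create-snapshot \
    --volume-id vol-0123456789abcdef0 \
    --description "pool migration snapshot"
```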
Create new volumes from the snapshots and attach them to the instance, making sure that the device mappings of the new volumes (/dev/sdx) are the same as those of the original volumes.
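If you prefer the CLI for this step as well, a sketch of creating a volume from a snapshot and attaching it at the original device name follows; the snapshot ID, volume ID, instance ID, Availability Zone, and device name are all placeholders:

```shell
# Create a new volume from the snapshot, in the instance's Availability Zone.
aws ec2 create-volume \
    --snapshot-id snap-0123456789abcdef0 \
    --availability-zone us-east-1a

# Attach it at the same device mapping the original volume used.
aws ec2 attach-volume \
    --volume-id vol-0fedcba9876543210 \
    --instance-id i-0123456789abcdef0 \
    --device /dev/sdf
```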
If migrating on the same instance, you will need to export the old pool first from the command line:
zpool export pool-name
From the UI, go to Storage Pools > Import, then choose the pool and import the volumes.
If migrating to a different instance, perform final housekeeping tasks, such as modifying or moving the required sharing config files (/etc/exports, /etc/target/saveconfig.json, /etc/samba/smb.conf), starting the needed services, and directing your clients or applications to use the IP address of the new share.
ZFS send & receive (Migrating pools on the same instance)
From the SoftNAS UI create the desired pool with the desired backend volumes.
SSH into your instance. For guidance on how to connect to your Linux instance, click here.
Create the first pool snapshot using the below command. This step will not require downtime.
zfs snapshot -r source-pool@Full
Send this snapshot to the destination pool, using the below command. This step will cause some performance degradation while transferring the data, so it is recommended to be performed outside of peak business hours.
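A typical form of this command, sketched here using the snapshot name from the previous step, pipes a recursive send into a resumable receive; the -s flag on the receive side is what records the receive_resume_token used by the resume command later in this procedure:

```shell
# -R sends the pool recursively (all datasets, properties, and snapshots);
# -s on the receive records a resume token if the transfer is interrupted;
# -F rolls the destination back to its most recent snapshot before receiving.
zfs send -R source-pool@Full | zfs receive -s -F destination-pool
```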
Now you can export both pools and remove the source pool's disks, then re-import the destination pool under the source pool's name as shown below:
zpool export source-pool
zpool export destination-pool
zpool import destination-pool source-pool
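Before restarting services, you can confirm that the rename took effect and the pool is healthy (pool name as in the steps above):

```shell
# Verify the pool is now imported under the original name
# and that all its devices are ONLINE.
zpool status source-pool
zpool list source-pool
```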
Now you can bring your clients/applications back up and remount your volumes, OR start the SoftNAS storage services again if they were stopped in step 5:
service monit start
/var/www/softnas/scripts/start-nasservices.sh
The pool is now migrated, you can return to the SoftNAS user interface and check the results.
If sending the full snapshot failed for any reason, you can resume it using the below command:
zfs send -t $(zfs get -H -o value receive_resume_token destination-pool) | zfs receive -s destination-pool
Zpool Export (and import from a new SoftNAS® Instance)
SSH into your instance.
For guidance on how to connect to your Linux instance, click here.
From CLI run the following command:
zpool export POOL-NAME
From the AWS console, detach the EBS volumes that the pool is based on.
From the AWS console, attach the EBS volumes to the new instance using their exact original mappings (e.g., /dev/sdx), as mapping a volume to the wrong device name may lead to data corruption during pool import.
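Before importing from the UI, you can optionally verify from the CLI on the new instance that the devices are visible and the exported pool is available for import:

```shell
# List attached block devices and their mappings.
lsblk

# Scan for exported pools available for import; run without a pool
# name, this only lists candidates and does not import anything.
zpool import
```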
Log into the new SoftNAS instance using your browser. In the new instance's UI, go to Storage Pools, and click Import, then choose the pool and import it.
Perform final housekeeping tasks, such as modifying or moving the required sharing config files (/etc/exports, /etc/target/saveconfig.json, /etc/samba/smb.conf), starting the needed services, and directing your clients or applications to use the IP address of the new share.
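One hedged sketch of moving the sharing configuration, assuming root SSH access from the new instance to the old one; the hostname and the service names in the final line are assumptions, not values from this document:

```shell
# Copy the NFS, iSCSI, and Samba sharing configs from the old instance.
# Paths are the ones listed above; old-instance is a placeholder hostname.
scp root@old-instance:/etc/exports /etc/exports
scp root@old-instance:/etc/target/saveconfig.json /etc/target/saveconfig.json
scp root@old-instance:/etc/samba/smb.conf /etc/samba/smb.conf

# Restart the corresponding services (service names are assumptions and
# may differ on your SoftNAS release).
systemctl restart nfs-server target smb
```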
SnapReplicate™ (Migrating to a new SoftNAS® Instance)
SnapReplicate™ is the easiest way to migrate your pool to a new instance, whether in a different availability zone, or in the same one. With SnapReplicate, you don't have to worry about cleaning up config files or services.
To use SnapReplicate™, follow the instructions in our documentation: