Symptoms
You have an existing SoftNAS instance set up, but you need to migrate it to another VPC in the same region. Reasons for this may include:
- Your current version of SoftNAS is out of date and cannot be upgraded without migration (your instance is running a version of SoftNAS older than 3.3.3).
- You are creating a new production environment from a staging environment.
Purpose
This knowledge base article covers the basic steps and key considerations when migrating a SoftNAS instance to another VPC within the same region.
Resolution
Planning Stage:
Before you begin the migration, collect the following information so that the migration succeeds and so that the new VPC's settings can be adjusted to match the original as closely as possible. This section provides both the GUI steps and the shell commands for obtaining each piece of information.
All shell commands assume root privileges, so a good first step is to run
sudo -i
so that you do not have to prefix every subsequent command with sudo.
- First, collect the licensing information. Licensing can be found in the Storage Administration Pane, under Settings.
- Next, collect the size of each volume, the volume name, and the name of the pool to which it belongs. This can all be found under Volumes and LUNS, in the Storage Administration pane. Alternatively, in the shell, you can run the command:
zfs list
- In Storage Pools (found directly under Volumes and LUNS), you will need to collect the SoftNAS pool name and the names of the disk devices in the pool. The above information can also be retrieved in the shell by running the command:
zpool list
- Once you have the names of the disk devices associated with the pools to be migrated, go to Disk Devices, under the Storage Administration pane.
For each disk device, record the following:
- If the disk is an EBS volume: the /dev/xvdX device name, size, and Volume ID from the AWS console.
- If the disk is S3-backed: the S3 bucket name, dev name, region, and size.
- To obtain the above disk information via the shell, run the following commands:
mount
lsblk -l
df -h
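If you would like a written record of this inventory to refer back to during the migration, one option (the file paths below are only examples) is to redirect the output of these commands to files:
sudo zpool list > /tmp/pool-inventory.txt      # pool names and sizes
sudo zfs list > /tmp/volume-inventory.txt      # volume names, sizes, and parent pools
lsblk -l > /tmp/disk-inventory.txt             # block devices and sizes
df -h > /tmp/mount-inventory.txt               # mounted filesystems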
- Next, if you have any NFS, CIFS, or iSCSI shares on your VPC, you will need to copy the associated information.
- To copy your NFS data, navigate to NFS Exports, under Storage Administration. Alternatively, run the command:
cat /etc/exports
- For CIFS/SAMBA configuration settings, navigate to CIFS Shares, also in the Storage Administration pane. In CIFS Shares, under Global Configuration, click Edit Config. Or, in the command shell, run:
cat /etc/samba/smb.conf
- For iSCSI, the configuration settings cannot be copied via the SoftNAS user interface. If you have iSCSI shares set up, it is important to copy the configuration to the new VPC. To copy the file to another server, run:
scp /etc/target/saveconfig.json <destination-server-ip>:/file/dir/save/
To save a copy to a local machine instead, run the following on the SoftNAS instance:
sudo cp /etc/target/saveconfig.json /tmp
sudo chown ec2-user /tmp/saveconfig.json
Then, from the local machine, pull the file down:
scp -i key.pem ec2-user@<softnas IP>:/tmp/saveconfig.json .
Performing the Migration via the SoftNAS console
Note: If performing a migration of a BYOL instance, you will need new BYOL keys for the new virtual machines. Please contact Buurst Sales for new licenses.
- The first consideration when migrating within the same region (or migrating at all) is to ensure that there are no active NFS/CIFS/AFP/iSCSI read/write operations being performed against the SoftNAS instance.
- When you are certain that no Read/Write operations are running on the SoftNAS instance, stop the instance via the AWS Console (this will ensure proper handling of EBS volumes).
- Once the instance is stopped, launch a new SoftNAS instance in the target VPC, with the same size and settings as your current instance. For guidance on launching a new SoftNAS instance, see Create and Configure an instance in AWS.
NOTE: You will not need to add new EBS disks!
- Next, navigate to the Volumes Console in AWS. Select all of the attached volumes from the original SoftNAS instance.
- Detach the volumes, and attach to the new SoftNAS instance. For additional guidance on this process, see our Managing Volumes documentation.
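If you prefer the AWS CLI to the console for this step, the equivalent operations look roughly like the sketch below; the volume ID, instance ID, and device name are placeholders to replace with your own values:
aws ec2 detach-volume --volume-id vol-0123456789abcdef0
aws ec2 attach-volume --volume-id vol-0123456789abcdef0 --instance-id i-0123456789abcdef0 --device /dev/sdf
Wait for each volume to reach the available state (for example, via aws ec2 describe-volumes) before attaching it to the new instance.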
- Log into the new SoftNAS instance, and navigate to Storage Pools, under the Storage Administration pane.
- Select the Import wizard. Be sure to give the storage pool the same name it had on the original SoftNAS instance.
Important: Check the Force Import box, or you will see an error.
Your new SoftNAS instance now has the same pools, volumes, and disks as the original.
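To double-check the import from the shell as well, you can optionally SSH into the new instance and confirm that every pool and volume from your planning notes is present and healthy:
sudo zpool status    # pool health and member disks
sudo zfs list        # volumes, sizes, and mount points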
Performing the Migration using the command shell:
- Be sure to unmount all client connections to the source SoftNAS instance prior to the planned downtime.
- Launch the new (target) SoftNAS instance from the AWS Marketplace, with the appropriate license size, if you have not already done so.
- SSH into the new (target) SoftNAS instance to confirm SSH access.
- For guidance on how to connect to your Linux instance, click here.
- For Guidance on how to connect to your Windows Instance, click here.
- Next, temporarily allow password authentication on the node. Open the SSH daemon configuration in a text editor:
sudo <text editor> /etc/ssh/sshd_config
- Find the line that says 'PasswordAuthentication no' and change it to 'PasswordAuthentication yes'.
- Save the file, then restart the SSH daemon:
sudo service sshd restart
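If you prefer not to edit the file by hand, a minimal non-interactive alternative (assuming the file contains the exact wording shown above) is:
sudo sed -i 's/^PasswordAuthentication no/PasswordAuthentication yes/' /etc/ssh/sshd_config
sudo service sshd restart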
- Log in to the new SoftNAS Instance UI (via the IP).
In the Storage Administration pane, expand Settings, and select Software Update.
If an update is available, run it.
- SSH into your current SoftNAS instance and stop all share services:
sudo service nfs stop
sudo service sernet-samba-smbd stop
sudo service sernet-samba-nmbd stop
sudo service sernet-samba-winbindd stop
sudo service fcoe-target stop
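Optionally, confirm that each service reports as stopped before continuing, for example:
sudo service nfs status
sudo service sernet-samba-smbd status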
- Generate a list of all current pools:
sudo zpool list
Copy the above output as a reference of all current pool names.
- Generate a list of all current volumes:
sudo zfs list
Copy the output of the above command as a reference of all current volume names.
- Run the following command to export each pool:
sudo zpool export <pool name from the list above>
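Repeat the export for every pool from your list. Once all pools are exported, a quick check should show that none remain imported:
sudo zpool list    # should now report "no pools available"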
- Copy NFS, CIFS/SAMBA and/or iSCSI configuration files to the target instance.
scp /etc/exports softnas@newinstanceip:~
scp /etc/target/saveconfig.json softnas@newinstanceip:~
scp /etc/samba/smb.conf softnas@newinstanceip:~
scp /var/lib/samba/*.tdb softnas@newinstanceip:~
- SSH into the target instance, and as the default softnas user run the following:
sudo cp ~/exports /etc/exports
sudo cp ~/saveconfig.json /etc/target/saveconfig.json
sudo cp ~/smb.conf /etc/samba/smb.conf
sudo cp ~/*.tdb /var/lib/samba/
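A quick listing can confirm that each configuration file landed where the services expect it:
sudo ls -l /etc/exports /etc/target/saveconfig.json /etc/samba/smb.conf /var/lib/samba/*.tdb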
- Stop both SoftNAS nodes, current and new.
- In the AWS EC2 Console, select the source SoftNAS instance and copy the instance ID.
- In the AWS EC2 Console, select the Volumes Console from the left hand menu. In the TAGS search bar, search for the current instance ID.
- Name the volumes (excluding the root volume) according to the listed /dev/sdX name (where sdX is the drive device name, e.g., /dev/sdf, /dev/sdg, etc.).
- Once named and confirmed with the EC2 information from the AWS console, detach the volumes from the current node by selecting the volumes, right-clicking, and selecting 'Detach Volume'.
- Once detached, attach each volume to the new SoftNAS node. Be careful to attach each volume to the /dev/sdX device that it is named after.
- Once the volumes have been re-attached to the new node, start the new SoftNAS node.
- SSH into the new (target) SoftNAS node. Run the following command:
sudo zpool import
- This command will list the importable pools. Import each pool and check the volumes associated.
To import:
sudo zpool import <pool name>
To check:
sudo zpool list
sudo zfs list
- Next, we will need to update ZFS to the current version. To see the pools and volumes requiring an upgrade, run:
sudo zfs upgrade
Then upgrade each pool listed in the output:
sudo zfs upgrade <pool name>
- Start services on the New SoftNAS Node.
sudo service sernet-samba-smbd start
sudo service sernet-samba-nmbd start
sudo service sernet-samba-winbindd start
sudo service fcoe-target restart
sudo service nfs start
- Optionally, you may also wish to copy over the original instance's snapshots and snapshot schedules, to ensure continuity. To do so, copy the following files to the target instance:
scp /var/www/softnas/config/snapshots.ini softnas@newinstanceip:~
scp /var/www/softnas/config/schedules.ini softnas@newinstanceip:~
Then, on the target instance, copy them to the same location (/var/www/softnas/config/).
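Once the services are running, you can optionally confirm that the shares are being served again from the new node, for example:
sudo exportfs -v     # NFS exports currently offered
sudo smbstatus -b    # brief list of active SMB connections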
NFSv4 Update Instructions
To update to NFSv4, the following steps will need to be performed:
SSH into the new (target) SoftNAS instance and run the following commands:
sudo cp ./exports /etc/exports
sudo <text editor> /etc/exports
Add the following line to the file:
/exports *(ro,fsid=0)
Note: The '*' above can be changed to an IP of each server for more security. Add a line for each IP in place of '*'.
Note: For each /<pool>/<vol> entry, copy the line and prefix the path with '/exports' so that the export becomes /exports/<pool>/<vol>.
sudo <text editor> /etc/fstab
/<pool>/<vol> /exports/<pool>/<vol> bind bind 0 0
Note: For each of the /exports/<pool>/<vol> in the /etc/exports, add a line like the one above.
Add the correct directories:
sudo mkdir -p /exports/<pool>/<vol>
Note: For each NFSv4 export used in the above steps, add a directory using the above command.
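If you have several volumes, a short loop can create all of the bind-mount targets in one pass; the pool and volume names below are placeholders for your own:
for vol in pool1/vol1 pool1/vol2; do
    sudo mkdir -p "/exports/$vol"
done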
On the new SoftNAS instance, run:
sudo mount -av
sudo service nfs restart
This will allow for both NFSv3 and NFSv4.
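To confirm that the bind mounts and exports are in place, you can optionally check from the server itself:
mount | grep /exports          # bind mounts created from /etc/fstab
sudo showmount -e localhost    # export list as clients will see it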
On the client, mount the directories with:
mount -o nfsvers=X <SoftNAS IP>:/<export path> /<mount point>
(where X equals the needed version). Alternatively, add 'vers=X' (where X equals the needed version) to the NFS mount options in the client's /etc/fstab.
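As an illustration only, a client /etc/fstab entry pinning the NFS version might look like the line below; the server IP, export path, and mount point are placeholders, and with vers=4 the path is interpreted relative to the fsid=0 pseudo-root (/exports) configured above:
10.0.0.10:/<pool>/<vol>  /mnt/<vol>  nfs  vers=4,_netdev  0 0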
Finally, rebuild Snap Replicate/SNAP HA, if applicable. For guidance on this, see our Setting Up Snap Replicate and SNAP HA or our High-Availability Guide.
Additional Information
- Create and Configure an instance in AWS
- Managing Volumes
- Setting Up Snap Replicate and SNAP HA
- High-Availability Guide
See Also:
- Migrate a SoftNAS VPC on AWS to another Region
- Migrate SoftNAS on VMware vSphere
- Migrate Microsoft Azure SoftNAS Instances
- How to Migrate from an AWS Marketplace-based instance to a BYOL licensing model instance