
Symptoms

You have an existing SoftNAS instance set up, but you need to migrate it to another VPC in the same region. Reasons for this may include:

  • Your current version of SoftNAS is out of date and cannot be upgraded without migration (your instance is running a version of SoftNAS older than 3.3.3).
  • You are creating a new production environment from a staging environment.

Purpose

This knowledge base article covers the basic steps and key considerations when migrating a SoftNAS instance to another VPC within the same region.

Resolution

Planning Stage:

Before you begin the migration process, collect the following information to ensure the migration succeeds and so that you can adjust the new VPC's settings to closely match the original. This section provides both the GUI locations and the shell commands for obtaining the desired information.

All shell commands below assume root privileges. A good first step is to run sudo -i so that you do not have to prefix every subsequent command with sudo.

  1. First, collect the licensing information. Licensing can be found in the Storage Administration Pane, under Settings.


     
  2. Next, collect the size of each volume, the volume name, and the name of the pool to which it belongs. This can all be found under Volumes and LUNS, in the Storage Administration pane. Alternatively, in the shell, you can run the command zfs list.


     

  3. In Storage Pools (found directly under Volumes and LUNS), you will need to collect the SoftNAS pool name and the names of the disk devices in the pool. This information can also be retrieved in the shell by running the command zpool list.


  4. Once you have the name of the Disk Devices associated with the pools to be migrated, go to Disk Devices, under the Storage Administration pane.

    If the disk is S3-backed, collect:
    • the S3 Bucket Name, dev name, region, and size

    For EBS, collect the following:

    • the /dev/xvdX device name, size, and Volume ID (from the AWS console)
  5. To obtain the above disk information via the shell, run the following commands:

    mount
    lsblk -l
    df -h
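
    If you prefer to collect the EBS Volume IDs and device names from the command line rather than the AWS console, a query along the following lines can be used from any machine with the AWS CLI configured (the instance ID below is a placeholder for your SoftNAS instance):

    aws ec2 describe-volumes \
        --filters Name=attachment.instance-id,Values=<instance-id> \
        --query 'Volumes[].{ID:VolumeId,Device:Attachments[0].Device,SizeGiB:Size}' \
        --output table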

  6. Next, if you have any NFS, CIFS, or iSCSI shares on your SoftNAS instance, you will need to copy the associated information.
    1. To copy your NFS data, navigate to NFS Exports, under Storage Administration. Alternatively, run the command:

      cat /etc/exports
       

    2. For CIFS/SAMBA configuration settings, navigate to CIFS Shares, also in the Storage Administration pane. In CIFS Shares, under Global Configuration, click Edit Config.



      Or, in the command shell, run:

      cat /etc/samba/smb.conf

    3. For iSCSI, the configuration settings cannot be copied via the SoftNAS user interface. If you have iSCSI shares set up, it is important to copy the configuration to the new VPC. To copy the configuration file, run:

      scp /etc/target/saveconfig.json <destination>

      where <destination> is where you want the file saved. For example, to copy it to another server, the argument takes the form:

      <server-ip>:/file/dir/save/

      To copy it down to your local machine instead, first stage the file where the ec2-user can read it:

      sudo cp /etc/target/saveconfig.json /tmp
      sudo chown ec2-user /tmp/saveconfig.json

      Then, from your local machine, run:

      scp -i key.pem ec2-user@<softnas IP>:/tmp/saveconfig.json .


Performing the Migration via the SoftNAS console

  1. The first consideration when migrating a VPC within the same region (or migrating a VPC at all) is to ensure that there are no active NFS/CIFS/AFP/iSCSI read/write operations being performed on the SoftNAS instance.
  2. When you are certain that no Read/Write operations are running on the SoftNAS instance, stop the instance via the AWS Console (this will ensure proper handling of EBS volumes).

  3. Once the instance is stopped, launch a new SoftNAS instance in the target VPC with the same size and settings as your current instance. For guidance on launching a new SoftNAS instance, see Create and Configure an instance in AWS.
    NOTE: You will not need to add new EBS disks!
  4. Next, navigate to the Volumes Console in AWS. Select all of the attached volumes from the original SoftNAS instance.
  5. Detach the volumes and attach them to the new SoftNAS instance. For additional guidance on this process, see our Managing Volumes documentation.
  6. Log into the new SoftNAS instance, and navigate to Storage Pools, under the Storage Administration pane.

  7. Select the Import wizard. Be sure to give the storage pool the same name it had on the original SoftNAS instance.

Important: Check the Force Import box, or you will see an error.
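
If you prefer to work from the shell, the equivalent of a forced import is generally zpool's -f flag (the pool name is whatever you recorded during the planning stage):

sudo zpool import -f <pool name>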

Your new SoftNAS instance now has the same pools, volumes, and disks as the original. 

Performing the Migration using the command shell:


  1. Be sure to unmount all client connections to the source SoftNAS instance prior to the planned downtime.
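
    To double-check from the source instance that no clients still appear to be connected, you can run, for example:

    sudo showmount -a
    sudo smbstatus

    Note that showmount -a reports clients recorded by the NFS server and may include stale entries, so treat it as a rough check only.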

  2. Launch the new (target) SoftNAS instance from the AWS Marketplace with the appropriate license size, if you have not already done so.
     
  3. SSH into the new (target) SoftNAS instance to confirm SSH access.

    - For guidance on how to connect to your Linux instance, see the AWS documentation on connecting to Linux instances.
    - For guidance on how to connect to your Windows instance, see the AWS documentation on connecting to Windows instances.
     
  4. Next, temporarily allow password authentication on the node by editing the SSH daemon configuration:

    sudo <text editor> /etc/ssh/sshd_config
     
  5. Find the line that says 'PasswordAuthentication no' and change it to 'PasswordAuthentication yes'.

  6. Save the file, then restart the SSH service:

    sudo service sshd restart
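
    If you prefer a non-interactive edit, the same change can be made with sed, assuming the line currently reads exactly 'PasswordAuthentication no':

    sudo sed -i 's/^PasswordAuthentication no/PasswordAuthentication yes/' /etc/ssh/sshd_config

    Then restart sshd as shown above.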

  7. Log in to the new SoftNAS Instance UI (via the IP).

    In the Storage Administration pane, expand Settings, and select Software Update.

    If an update is available, run it.

  8. SSH into your current SoftNAS and stop all share services:

    sudo service nfs stop
    sudo service sernet-samba-smbd stop
    sudo service sernet-samba-nmbd stop
    sudo service sernet-samba-winbindd stop
    sudo service fcoe-target stop

  9. Generate a list of all current pools:

    sudo zpool list

    Copy the above output as a reference of all current pool names.

  10. Generate a list of all current volumes:

    sudo zfs list

    Copy the output of the above command as a reference of all current volume names. 
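
    If you want to keep both lists for comparison after the migration, you can also redirect them to files (the file names below are only examples):

    sudo zpool list > /tmp/pools-before-migration.txt
    sudo zfs list > /tmp/volumes-before-migration.txt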

  11. Run the following command to export pools: 

    sudo zpool export <pool name from step 9>
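
    If you have more than one pool, export each of them in turn; for example, with hypothetical pools named pool1 and pool2:

    sudo zpool export pool1
    sudo zpool export pool2

    Once all pools are exported, sudo zpool list should report that no pools are available.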

  12. Copy NFS, CIFS/SAMBA and/or iSCSI configuration files to the target instance.

    scp /etc/exports softnas@newinstanceip:~
    scp /etc/target/saveconfig.json softnas@newinstanceip:~
    scp /etc/samba/smb.conf softnas@newinstanceip:~
    scp /var/lib/samba/*.tdb softnas@newinstanceip:~

  13. SSH into the target instance, and as the default softnas user run the following:

    sudo cp ~/exports /etc/exports
    sudo cp ~/saveconfig.json /etc/target/saveconfig.json
    sudo cp ~/smb.conf /etc/samba/smb.conf
    sudo cp ~/*.tdb /var/lib/samba/
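
    (Optional) If you would like to keep the target instance's original configuration files for reference, you could copy them aside before running the cp commands above, for example:

    sudo cp /etc/exports /etc/exports.orig
    sudo cp /etc/samba/smb.conf /etc/samba/smb.conf.orig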


  14. Stop both SoftNAS nodes (current and new).

  15. In the AWS EC2 Console, select the source SoftNAS instance and copy the instance ID.

  16. In the AWS EC2 Console, select the Volumes Console from the left hand menu. In the TAGS search bar, search for the current instance ID.

  17. Name the volumes (excluding the root volume) according to the listed /dev/sdX name (where sdX is the drive device name, i.e., /dev/sdf, /dev/sdg, etc.).

  18. Once named and confirmed with the EC2 information from the AWS console, detach the volumes from the current node by selecting the volumes, right-clicking, and selecting 'Detach Volume'.

  19. Once detached, attach each volume to the new SoftNAS node. Be careful to attach each volume to the /dev/sdX device that it is named after.
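
    If you prefer to script the detach and attach steps instead of working in the console, the same operations can be performed with the AWS CLI (the volume ID, instance ID, and device name below are placeholders):

    aws ec2 detach-volume --volume-id <vol-id>
    aws ec2 attach-volume --volume-id <vol-id> --instance-id <new-instance-id> --device /dev/sdf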

  20. Once the volumes have been re-attached to the new node, start the new SoftNAS node. 

  21. SSH into the new (target) SoftNAS node. Run the following command:

    sudo zpool import

  22. This command will list the importable pools. Import each pool and check the associated volumes.

    To import:

    sudo zpool import <pool name>

    To check:

    sudo zpool list
    sudo zfs list
  23. Next, update ZFS on the imported pools to the current version. The first command below lists any filesystems running an older ZFS version; the second upgrades a given pool:

    sudo zfs upgrade
    sudo zfs upgrade <pool name>
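
    If a pool contains nested filesystems, the -r flag upgrades them recursively:

    sudo zfs upgrade -r <pool name>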

  24. Start services on the New SoftNAS Node.

    sudo service sernet-samba-smbd start
    sudo service sernet-samba-nmbd start
    sudo service sernet-samba-winbindd start

    sudo service fcoe-target restart
    sudo service nfs start
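
    To confirm the shares are being served again, you can review the active exports and SMB status, for example:

    sudo exportfs -v
    sudo smbstatus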

  25. Optionally, you may also wish to copy over the original instance's snapshots and snapshot schedules, to ensure continuity. To do so, copy the configuration files to the target instance:

    scp /var/www/softnas/config/snapshots.ini softnas@newinstanceip:~
    scp /var/www/softnas/config/schedules.ini softnas@newinstanceip:~

    Then, as in step 13, copy them into /var/www/softnas/config/ on the target instance with sudo cp.

NFSv4 Update Instructions

To update to NFSv4, the following steps will need to be performed:

SSH into the new (target) SoftNAS instance and run the following commands:

sudo cp ./exports /etc/exports
sudo <text editor> /etc/exports

/exports *(ro,fsid=0)

 Note: The '*' above can be changed to an IP of each server for more security. Add a line for each IP in place of '*'.

 Note: For each /<pool>/<vol> entry in /etc/exports, copy the line and prefix its path with '/exports', so that each share is also exported as /exports/<pool>/<vol>.

sudo <text editor> /etc/fstab

/<pool>/<vol> /exports/<pool>/<vol> bind bind 0 0

Note: For each of the /exports/<pool>/<vol> in the /etc/exports, add a line like the one above.

Add the correct directories:

sudo mkdir -p /exports/<pool>/<vol>

Note: For each NFSv4 export used in the above steps, add a directory using the above command.
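
As an illustration only, assume a hypothetical pool named pool1 containing a volume named vol1, exported with the same options it had on the original instance (shown here simply as rw). The resulting configuration would look like this:

/etc/exports:

/exports *(ro,fsid=0)
/exports/pool1/vol1 *(rw)

/etc/fstab:

/pool1/vol1 /exports/pool1/vol1 bind bind 0 0

Directory creation:

sudo mkdir -p /exports/pool1/vol1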

On the new SoftNAS instance, run:

sudo mount -av
sudo service nfs restart

This will allow for both NFSv3 and NFSv4.

On the client, then mount the directories with a command of the form:

mount -t nfs -o nfsvers=X <softnas IP>:/<export path> /<mount point>

(where X equals the needed version), or, alternatively, add 'vers=X' (where X equals the needed version) to the options of the NFS entry in the client's /etc/fstab.

Finally, rebuild Snap Replicate/ HA, if applicable. For guidance on this, see our Setting Up Snap Replicate and SNAP HA or our High-Availability Guide.
