ZFS CSI Provisioner for SoftNAS

SoftNAS can now be integrated with Kubernetes as a storage backend through a ZFS CSI provisioner that exports volumes over NFS.

  • This provisioner creates a DaemonSet that runs on all worker nodes in an HA configuration.

  • By default, the provisioner is deployed in the default Kubernetes namespace.

  • It runs as root in order to manage (create, update, and delete) the ZFS file systems in SoftNAS.

  • The provisioner creates two storage classes, named zfs-persistent-storage and zfs-managed-storage. zfs-managed-storage is the default storage class and, as the name implies, its reclaim policy is Delete: when the deployment(s) using it and the associated persistent volume(s) are deleted, the corresponding ZFS volume inside SoftNAS is deleted automatically as well. This is ideal for pods that do not need data retention. zfs-persistent-storage, on the other hand, retains the volume when the deployment is deleted; the volume must then be removed manually inside SoftNAS with the zfs destroy command or from the UI. (An example PVC using these storage classes is shown after this list.)

  • All PVCs are thick provisioned with the requested storage capacity.

  • When the provisioner is deployed without further configuration, it exports the PVCs as NFS version 3, which can be mounted into Kubernetes pods automatically via persistent volume claims. If you would like to export them with NFS version 4, the PVCs must be added to the /etc/exports file on the source node (of an HA pair) or on the single node, and the pods mounting them must then be restarted to pick up the change. Instructions are provided in step #10 of Method 2.

  • When a failover event occurs in an HA scenario, pods must be manually restarted to clear the stale file handle often associated with NFS mounts.
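
As an illustration, a minimal PersistentVolumeClaim requesting storage from the default zfs-managed-storage class might look like the sketch below. The claim name, namespace, and size are placeholders, not values taken from this guide; swap in zfs-persistent-storage if the data must survive deletion of the workload:

    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: example-zfs-claim                  # hypothetical name, choose your own
      namespace: default                       # the namespace the provisioner runs in by default
    spec:
      accessModes:
        - ReadWriteMany                        # NFS-backed volumes can be shared across pods
      storageClassName: zfs-managed-storage    # use zfs-persistent-storage to retain data
      resources:
        requests:
          storage: 5Gi                         # thick provisioned at the requested capacity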

 

Method 1: Using the Helm chart

Prerequisites

  1. A k8s cluster.

  2. A kubeconfig file with admin rights.

  3. A SoftNAS instance (single node or HA pair).

Step-by-step guide

  1. Create an SSH key pair on your local machine using the command below:
    # ssh-keygen -t ed25519

  2. Copy the public key id_ed25519.pub to /root/.ssh/authorized_keys on your HA pair or single SoftNAS instance (for example with ssh-copy-id, as shown below).
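    One way to do this is with ssh-copy-id; the address below is a placeholder for your own VIP or node address:
    # ssh-copy-id -i ~/.ssh/id_ed25519.pub root@<SoftNAS-VIP>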

  3. Add the Helm repository for the SoftNAS provisioner by running the following command:
    # helm repo add ccremer https://ccremer.github.io/charts

  4. Update the Helm repository to ensure you have the latest charts:
    # helm repo update

  5. Download this pre-populated values file.

  6. Edit the values file with your preferred settings:
    yourHostName: Set this to your desired VIP (Virtual IP) address.

    parentDataset: Specify your ZFS pool name.

    policy: By default, it's set to "Delete." You can change it to "Retain" for persistent storage.

    config: Provide SSH configuration details for SoftNAS. Ensure that the IdentityFile points to your private SSH key. You can use an absolute path or ensure the key is in the same directory.
    config: |
      Host SoftNAS-VIP
        IdentityFile ~/.ssh/id_ed25519
        User root

    knownHosts:
      - host: <your VIP (Virtual IP) address>
        pubKey: <source node host SSH key, found in /etc/ssh/ssh_host_ecdsa_key.pub>
      - host: 1.1.1.1
        pubKey: <target node host SSH key, found in /etc/ssh/ssh_host_ecdsa_key.pub>
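
    Putting these settings together, a filled-in values file might look roughly like the sketch below. The exact key nesting depends on the pre-populated file you downloaded, and every value shown (VIP, pool name, key material) is a placeholder:

    yourHostName: 10.0.0.10                # placeholder VIP, replace with your own
    parentDataset: naspool1                # placeholder ZFS pool name
    policy: Delete                         # or Retain for persistent storage
    config: |
      Host SoftNAS-VIP
        IdentityFile ~/.ssh/id_ed25519
        User root
    knownHosts:
      - host: 10.0.0.10                    # placeholder VIP
        pubKey: ecdsa-sha2-nistp256 AAAA...    # from /etc/ssh/ssh_host_ecdsa_key.pub on the source node
      - host: 1.1.1.1
        pubKey: ecdsa-sha2-nistp256 AAAA...    # from /etc/ssh/ssh_host_ecdsa_key.pub on the target node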

  7. Install the Helm chart in the default namespace using the modified values.yaml file. Run the following command:
    # helm install kubernetes-zfs-provisioner ccremer/kubernetes-zfs-provisioner --values values.yaml

  8. Verify the zfs-provisioner deployment using kubectl or any Kubernetes dashboard, for example:
    # kubectl get pods

  9. Add this script to the /var/www/softnas/scripts directory. The script automatically adds all PVCs (Persistent Volume Claims) created by the ZFS provisioner to the /etc/exports file, mounts them, and reloads NFS without disrupting existing connections. A sketch of the kind of logic it performs is shown below.
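
    For reference only, a minimal sketch of this kind of logic might look like the following. It assumes the provisioner names its datasets pvc-*; use the script provided above, not this sketch, in production:

    #!/bin/bash
    # Illustrative sketch: export every dataset created by the ZFS provisioner
    # (datasets named pvc-*) over NFS and reload the export table in place.
    EXPORTS_FILE=/etc/exports

    zfs list -H -o name,mountpoint | while read -r name mountpoint; do
        # Only consider datasets created by the ZFS provisioner
        case "$(basename "$name")" in
            pvc-*) ;;
            *) continue ;;
        esac

        # Make sure the dataset is mounted
        zfs mount "$name" 2>/dev/null

        # Append an export entry if one is not already present
        if ! grep -qs "^$mountpoint " "$EXPORTS_FILE"; then
            echo "$mountpoint *(rw,sync,no_root_squash)" >> "$EXPORTS_FILE"
        fi
    done

    # Re-export the table without disrupting existing NFS connections
    exportfs -ra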

  10. Run the script
    # /var/www/softnas/scripts/monitor_zfs_exports.sh

  11. To test the provisioner, you can use this file and confirm that volumes are provisioned correctly.

 

Method 2: Using the YAML manifest

Prerequisites

  1. A k8s cluster.

  2. A kubeconfig file with admin rights.

  3. A SoftNAS instance (single node or HA pair).

Step-by-step guide

  1. Create an SSH key pair on your local machine using the command below:
    # ssh-keygen -t ed25519

  2. Copy the public key id_ed25519.pub to /root/.ssh/authorized_keys on your HA pair or single SoftNAS instance (see Method 1, step 2).

  3. Download the zfs-provisioner-manifest.yaml file at the bottom of this page

  4. Edit the file and replace the circled lines from the screenshot below with your actual information. Note: this provisioner was tested using an HA pair, so 1.1.1.1 was used as the VIP. An illustrative sketch of the kind of values to replace is shown below.
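
    The values to replace mirror the Helm settings from Method 1 (step 6): the SoftNAS address (VIP), the parent dataset (ZFS pool name), and the reclaim policy. A purely illustrative fragment with placeholder values, to help you locate the corresponding lines in zfs-provisioner-manifest.yaml:

    yourHostName: 1.1.1.1          # the VIP used during HA testing; replace with your own
    parentDataset: naspool1        # replace with your ZFS pool name
    policy: Delete                 # or Retain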

     

  5. Apply the manifest file:
    # kubectl apply -f zfs-provisioner-manifest.yaml

  6. Verify that the provisioner pods are running:
    # kubectl get pods

  7. Now you can run a test using the zfs-nginx-deployment.yaml manifest below to ensure that the provisioner is working correctly. (An illustrative sketch of such a test deployment follows this step.)
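
    For reference, a test manifest along the lines of zfs-nginx-deployment.yaml might look like the sketch below. This is an illustration only (names, image, and size are placeholders), so prefer the actual zfs-nginx-deployment.yaml provided with this page:

    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: zfs-nginx-pvc                      # hypothetical claim name
    spec:
      accessModes:
        - ReadWriteMany
      storageClassName: zfs-managed-storage    # default class created by the provisioner
      resources:
        requests:
          storage: 1Gi
    ---
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: zfs-nginx
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: zfs-nginx
      template:
        metadata:
          labels:
            app: zfs-nginx
        spec:
          containers:
            - name: nginx
              image: nginx
              volumeMounts:
                - name: data
                  mountPath: /usr/share/nginx/html
          volumes:
            - name: data
              persistentVolumeClaim:
                claimName: zfs-nginx-pvc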

  8. From the SoftNAS UI, you should see the PVC created under Volumes and LUNs.

  9. From the SnapHA page, you should see it added to the replication cycle as well.

     

  10. IMPORTANT: To export a PVC as NFS v4, run the commands below:
    a. Run # zfs list to list all the current ZFS volumes on the system
    b. Copy the names of all the PVCs that need to be exported as NFS v4
    c. Edit the /etc/exports file and paste them in there
    d. Run # /var/www/softnas/scripts/mount_nfsv4.sh
    e. Restart the NFS server by running # systemctl restart nfs-server
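
    As an illustration, an /etc/exports entry for a PVC dataset might look like the line below; the pool name, PVC name, allowed clients, and export options are placeholders to adapt to your environment:

    /naspool1/pvc-1234abcd-example   10.0.0.0/24(rw,sync,no_root_squash)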



     

 
