Kubernetes CSI Driver#

You can use cunoFS on Kubernetes by deploying the cunoFS CSI Driver onto your cluster.

Introduction#

The cunoFS CSI Driver enables you to use your Amazon S3, Google Cloud Storage and Microsoft Azure storage within a Kubernetes cluster through cunoFS.

You interact with the CSI Driver through helm, so all of the usual helm configuration and deployment options are available.

The helm chart is hosted under oci://registry-1.docker.io/cunofs/cunofs-csi-chart, and any chart configuration is done through the values.yaml file.

You can browse the available helm chart versions on Docker Hub.
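If you prefer the command line, you can also inspect a chart version's metadata and its default values.yaml before installing it (this requires a Helm version with OCI registry support):

helm show chart oci://registry-1.docker.io/cunofs/cunofs-csi-chart --version <chart_version>
helm show values oci://registry-1.docker.io/cunofs/cunofs-csi-chart --version <chart_version>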

The cunoFS CSI Driver is composed of multiple Kubernetes objects: a CSIDriver, a StatefulSet, a DaemonSet and two Secrets.

Installation#

To install the cunoFS CSI Driver, you need to activate cunoFS by providing a valid cunoFS Professional or Enterprise license, and to import object storage credentials so that cunoFS can access your cloud buckets (see Activation and Importing Object Storage Credentials below).

You can deploy the chart, activate the license and import your credentials in one command:

helm install cunofs-csi-chart oci://registry-1.docker.io/cunofs/cunofs-csi-chart --version <chart_version> \
  --set cunofsLicense.license="$(cat cuno_license.txt)" \
  --set credsToImport="{$(cat creds_file.txt),$(cat other_creds_file.txt)}"

You can check the status of all the cunoFS CSI Driver resources by running:

kubectl get -n kube-system all -l app.kubernetes.io/name=cunofs-csi-driver

Activation#

To activate the cunoFS CSI Driver, you have to provide a valid cunoFS Professional or Enterprise license (please visit our website for more details) in the cunofsLicense.license variable of the helm values.yaml file.

You can do this directly while installing the helm chart by adding a --set cunofsLicense.license="..." argument on the CLI, as follows:

helm install cunofs-csi-chart oci://registry-1.docker.io/cunofs/cunofs-csi-chart --version <chart_version> --set cunofsLicense.license="$(cat license_file.txt)"

Alternatively, you can specify your license key manually by downloading the chart and setting the variable in the file directly. This simplifies subsequent deployments of the chart:

Warning

Please ensure safe storage of your chart if you choose to store credentials embedded inside of it.

helm pull --untar oci://registry-1.docker.io/cunofs/cunofs-csi-chart --version <chart_version>

Then, set the cunofsLicense.license variable to your license key:

# values.yaml file
cunofsLicense:
  license: "<your license key>"

Finally, install the local chart by pointing to its directory:

helm install cunofs-csi-chart ./path/to/chart
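To confirm that the license value was picked up by the release, you can print the values you supplied; note that the license is shown in plain text, so treat the output as sensitive:

helm get values cunofs-csi-chart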

Importing Object Storage Credentials#

You can import multiple object storage credentials by listing their contents in the credsToImport yaml array in the values.yaml helm file.

You can do it in two different ways:

  • By setting the value of the array elements from the CLI while deploying the chart

  • By downloading the chart and setting the values in the values.yaml file manually

The cunoFS CSI Driver manages your credentials with cuno creds, so you can use cuno creds locally to verify that you have access to all your buckets before deploying the credentials to a cluster.
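As a purely illustrative local check before deploying, something like the following should work; the exact cuno subcommand names are an assumption here and should be verified against your cunoFS installation:

# Assumed workflow: import the same credentials file you will pass to helm,
# then confirm the bucket is reachable through cunoFS.
cuno creds import ./creds_s3.txt
cuno run ls s3://cuno-csi-testing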

Please refer to the Credentials Management section of the user guide for more information about cunoFS credentials.

Helm has two different syntaxes to set elements of an array through the CLI:

  • The element-wise notation: --set credsToImport'[0]'="<credential 1>" --set credsToImport'[1]'="<credential 2>"

  • The whole-array notation: --set credsToImport="{<credential 1>,<credential 2>}" with each credential separated by a comma ,

Examples#

In the following examples, you have three different credentials you want to use: an AWS S3 credential, a JSON-formatted Google Cloud Storage credential and a Microsoft Azure credential. They are stored in the creds_s3.txt, creds_gs.json and creds_az.txt files respectively.

Note

Each element of the credsToImport array must contain the full contents of the credentials file, not the path to the file, due to helm security constraints.
Since Google Cloud credentials can contain characters that interfere with shells and require error-prone quoting, you can specify the contents of a credentials file either as-is or base64 encoded.

Set Through CLI#

Single-element notation#

Note

Be sure to quote the brackets so that your shell doesn’t perform pathname expansion.

Note

Please ensure that your credentials file doesn't contain characters that interfere with your shell or the helm --set notation. If it does, base64 encode the file before passing it to the cunoFS CSI Driver. You can base64 encode all of the credentials you pass to helm to avoid issues with quoting and shell expansion.

helm install cunofs-csi-chart oci://registry-1.docker.io/cunofs/cunofs-csi-chart --version <chart_version> --set credsToImport'[0]'="$(cat creds_s3.txt)" --set credsToImport'[1]'="$(cat creds_gs.json | base64)" --set credsToImport'[2]'="$(cat creds_az.txt)"

Whole-array notation#

helm install cunofs-csi-chart oci://registry-1.docker.io/cunofs/cunofs-csi-chart --version <chart_version> --set credsToImport="{$(cat creds_s3.txt),$(cat creds_gs.json | base64),$(cat creds_az.txt)}"

Set Manually#

Alternatively, you can set credentials in the chart directly by downloading the chart and modifying the values of values.yaml:

Warning

Please ensure safe storage of your chart if you choose to store credentials embedded inside of it.

helm pull --untar oci://registry-1.docker.io/cunofs/cunofs-csi-chart --version <chart_version>

Then, set the elements of the credsToImport array to your credentials:

# values.yaml
credsToImport:
  - "<your first credential here>"
  - "<your second credential here>"
  - "<etc...>"

Finally, install the chart by pointing to its local directory:

helm install cunofs-csi-chart ./path/to/chart

Allocating Storage To A Cluster#

By default, storage in K8s is ephemeral, meaning that it does not persist between reboots, upgrades and redeployments of the cluster, or even restarts of the underlying containers. This lets K8s scale, replicate, upgrade and heal itself much more easily, since there is no state to keep. However, many applications still require persistent storage.

To that end, K8s offers storage allocation through two abstractions: the PersistentVolume (PV for short) and the PersistentVolumeClaim (PVC for short). The cluster administrator creates PVs, each of which binds to exactly one PVC created by a cluster user when they deploy their application. PVs and PVCs are K8s objects with a lifecycle independent of the containers they are bound to. A PVC can then be mounted as a volume, at a predefined path, into another K8s object that needs persistent storage, such as a Pod or a Deployment.
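If you want to see how PVs and PVCs pair up on a live cluster, the standard kubectl listings show each object and its binding status:

kubectl get persistentvolumes
kubectl get persistentvolumeclaims --all-namespaces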

To use storage through cunoFS, first deploy the cunoFS CSI Driver onto your cluster. Then decide how you want to allocate storage: statically or dynamically.

When allocating storage statically, the cluster administrator defines each PV manually. This mode works well when you know exactly how much storage your cluster needs, you want tight control over that storage, and you don't expect to scale it soon.

You can create a PV as follows:

Note

The options in spec.csi.volumeAttributes must be quoted strings. An omitted option is treated as false; an option is only enabled if its value is the string "true".

apiVersion: v1
kind: PersistentVolume
metadata:
  name: cunofs-pv
spec:
  capacity:
    storage: 16Ei # ignored but required
  accessModes:
    - ReadWriteOncePod # currently only "ReadWriteOncePod" is supported
  csi:
    driver: cunofs.csi.com # required
    volumeHandle: cunofs-csi-driver-volume
    volumeAttributes:
      root: "/cuno/s3/bucket/subdirectory/other_subdirectory" # optional
      posix: "true" # optional
      allow_root: "true" # optional
      allow_other: "true" # optional
      auto_restart: "true" # optional
      readonly: "true" # optional

Then, you can make a PVC with:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: cunofs-pvc
spec:
  accessModes:
    - ReadWriteOncePod
  storageClassName: "" # ensures that no dynamic provisioning occurs
  resources:
    requests:
      storage: 16Ei # ignored but required
  volumeName: cunofs-pv # PV metadata.name

Finally, bind your resource to the PVC by mounting it as such:

apiVersion: v1
kind: Pod
metadata:
  name: consumer-pod
spec:
  containers:
    - name: cunofs-app
      image: centos
      command: ["/bin/sh"]
      args: ["-c", "echo 'Hello from the container!' > /data/s3/cuno-csi-testing/K8s_$(date -u).txt; tail -f /dev/null"]
      volumeMounts:
        - name: persistent-storage
          mountPath: /data
  volumes:
    - name: persistent-storage
      persistentVolumeClaim:
        claimName: cunofs-pvc # PVC metadata.name
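To try this static example end to end, you can apply the three manifests above and then check that the claim bound and that the Pod can see the mount (a sketch; the file names below are hypothetical, so adjust them to wherever you saved the manifests):

kubectl apply -f cunofs-pv.yaml -f cunofs-pvc.yaml -f consumer-pod.yaml
kubectl get pvc cunofs-pvc # STATUS should be Bound
kubectl exec consumer-pod -- ls /data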

You can also specify a template once, and Kubernetes will automatically generate a PV every time a PVC references it (dynamic provisioning).

You can do it by deploying a StorageClass object:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: cunofs-storageclass
provisioner: cunofs.csi.com
reclaimPolicy: Retain # default is Delete
parameters:
  cloud-type: s3 # required; one of s3, az or gs
  bucket: cuno-csi-testing # required; the bucket must already exist
  bucket-subdir: test_kubernetes # optional
  # Options passed down to the PV:
  posix: "true" # optional
  allow_root: "true" # optional
  allow_other: "true" # optional
  auto_restart: "true" # optional
  readonly: "true" # optional

After deploying the StorageClass, you can allocate storage by creating a PVC and referencing it in spec.storageClassName:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: cunofs-pvc
spec:
  accessModes:
    - ReadWriteOncePod
  storageClassName: "cunofs-storageclass" # StorageClass metadata.name
  resources:
    requests:
      storage: 16Ei # ignored but required

Kubernetes will call the cunoFS CSI Driver to create a PV, which will bind to the PVC. You can now attach the PVC to a K8s object.
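If you want to confirm that dynamic provisioning worked, you can check that the PVC is bound and inspect the PV that the driver generated (the PV name is generated, so it will differ from run to run):

kubectl get pvc cunofs-pvc
kubectl get pv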

Note

A generic inline volume is not the same as a CSI inline volume. The former implicitly creates a PVC that binds to a StorageClass dynamically, and works with any CSI Driver that allows dynamic storage allocation including the cunoFS CSI Driver. The latter implicitly creates a PV that is tied to the lifetime of a Pod, and requires explicit support from each driver. Currently, the cunoFS CSI Driver does not support CSI inline volumes.

Since the lifetime of a dynamically allocated PV is often tied to another K8s object, such as a Pod, there is an alternative syntax for defining PVCs. You can define the volume at the mount point of the K8s object, which in this example is a Pod:

apiVersion: v1
kind: Pod
metadata:
  name: consumer-pod-dyn-inline
spec:
  containers:
    - name: cunofs-app-inline
      image: centos
      command: ["/bin/sh"]
      args: ["-c", "echo 'Hello from the container, inline volume!' >> /data/generic-inline-k8s_$(date -u).txt; tail -f /dev/null"]
      volumeMounts:
        - name: inline-cuno-storage
          mountPath: /data
  volumes:
    - name: inline-cuno-storage
      ephemeral:
        volumeClaimTemplate:
          metadata:
            labels:
              type: my-inline-volume
          spec:
            accessModes: [ "ReadWriteOncePod" ]
            storageClassName: cunofs-storageclass # StorageClass metadata.name
            resources:
              requests:
                storage: 16Ei # ignored but required

The options are the same as for an explicit PVC.
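For generic ephemeral volumes, Kubernetes creates the backing PVC automatically and names it <pod name>-<volume name>. For the Pod above you can therefore inspect the generated claim like this (name shown for illustration):

kubectl get pvc consumer-pod-dyn-inline-inline-cuno-storage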

Configuration#

You can pass options to the cunoFS CSI Driver in three different ways:

  • the chart’s values.yaml file (directly in the file or through the CLI with --set)

  • through the PersistentVolume when statically allocating storage

  • through the StorageClass when dynamically allocating storage

You can configure all of the chart variables through the values.yaml file before or while deploying the chart. You can either download the chart and change each YAML variable manually, or --set them through CLI arguments while installing the helm chart.

values.yaml options#

| Yaml Value | Description | Default Value |
| --- | --- | --- |
| namespace | Specifies the namespace where the CSI Driver will be deployed | kube-system |
| cunofsCSIimage.pullPolicy | Specifies how the Docker image is pulled onto the Node and Controller. Only useful to change if self-hosting the Docker image | Always |
| cunofsCSIimage.name | Specifies the cunoFS CSI Docker image. Only useful to change if self-hosting the Docker image | docker.io/cunofs/cunofs_csi:latest |
| cunofsLicense.license | The license used for activating cunoFS on the Driver. It needs to be a valid Professional or Enterprise license | <empty> |
| credsToImport | YAML array that you can populate with the contents of your s3/az/gs credential files | <empty> |
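As an illustrative sketch, a fully populated values.yaml could look like the following, assuming the key nesting shown in the table above; all values here are placeholders and the defaults are usually fine:

# values.yaml
namespace: kube-system
cunofsCSIimage:
  pullPolicy: Always
  name: docker.io/cunofs/cunofs_csi:latest
cunofsLicense:
  license: "<your license key>"
credsToImport:
  - "<contents of your first credentials file>"
  - "<contents of your second credentials file>"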

You can set options for static storage allocation through the PersistentVolume K8s object.

PersistentVolume options#

| Yaml Value | Description |
| --- | --- |
| metadata.name | Can be any name as long as it's unique |
| spec.capacity.storage | This value is ignored, but it needs to be set |
| spec.csi.driver | Must be set to "cunofs.csi.com" |
| spec.csi.volumeHandle | Name of the volume; needs to be unique |
| spec.accessModes | Currently only "ReadWriteOncePod" is supported |
| spec.csi.volumeAttributes.root | The cloud URI that will be mounted at the target mount path. If not specified, you can access s3, az and gs through the target's /s3, /az or /gs subdirectories |
| spec.csi.volumeAttributes.posix | Set it to "true" to enforce strict POSIX mode for cunoFS |
| spec.csi.volumeAttributes.allow_root | Set it to "true" to allow the root user to use the mount |
| spec.csi.volumeAttributes.allow_other | Set it to "true" to allow all users to use the mount |
| spec.csi.volumeAttributes.auto_restart | Set it to "true" to automatically restart cunoFS if an error occurs |
| spec.csi.volumeAttributes.readonly | Set it to "true" to mount the volume as read only |

You can set options for dynamic storage allocation through the StorageClass K8s object.

StorageClass options#

| Yaml Value | Description |
| --- | --- |
| metadata.name | Can be any name as long as it's unique |
| provisioner | Needs to be "cunofs.csi.com" |
| reclaimPolicy | "Retain" will not delete the generated PVs and their storage when the PVCs go out of scope; "Delete" will |
| parameters.cloud-type | Can be "s3", "az" or "gs" |
| parameters.bucket | The bucket used to create volumes |
| parameters.bucket-subdir | Optional. The subdirectory of the bucket where the PVCs will get generated. Can be nested subdirectories like "dir/other_dir/yet_another_dir" |
| parameters.posix, parameters.allow_root, parameters.allow_other, parameters.auto_restart, parameters.readonly | These options are passed down to the generated PV and behave the same way as described in the PersistentVolume options |

Updating#

You can upgrade the cunoFS CSI Driver to a newer version using:

helm upgrade --reuse-values cunofs-csi-chart oci://registry-1.docker.io/cunofs/cunofs-csi-chart --version <new_version>
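To check which chart version is currently deployed before upgrading, you can list the release (add -n <namespace> if you installed the release into a specific namespace):

helm list --filter cunofs-csi-chart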

Uninstallation#

You can uninstall the CSI with:

helm uninstall cunofs-csi-chart
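Uninstalling the chart removes the driver's own objects, but PersistentVolumes and PersistentVolumeClaims created for your workloads are not deleted by helm; you can check for leftovers with:

kubectl get pv
kubectl get pvc --all-namespaces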

Technical Details#

The cunoFS CSI Driver abides by the Container Storage Interface (CSI) standard used by Kubernetes.

It implements the Node, Controller and Identity plugins and uses sidecar containers to simplify its deployment and maintenance.

The CSI Driver is shipped as a single binary, which can act as the Node or the Controller depending on the context (how it is deployed and which sidecar containers are connected). The helm chart deploys Docker containers that have this binary preinstalled. The Node plugin refers to the ability to mount and organise existing PersistentVolumes on a Kubernetes node. The Controller plugin implements the ability to create PersistentVolumes dynamically through a StorageClass.

The Node and the Controller need to handle logic at different levels:

  • The Node plugin needs to be deployed on every K8s Node, since it handles mounting logic that’s specific to each machine on which the application containers run. Therefore, it is deployed via a K8s DaemonSet. Additionally, these sidecar containers are shipped with the Node:

    Liveness Probe

    This sidecar container ensures that the driver remains responsive, and replaces the driver container on crash.

    Node Driver Registrar

    This sidecar container registers the driver to the kubelet to simplify its discovery and communication.

  • The Controller plugin needs to be unique across a Kubernetes cluster, since it handles the lifecycle of PersistentVolumes, which are K8s global objects. It is therefore managed through a K8s StatefulSet:

    Liveness Probe

    This sidecar container, like with the Node plugin, ensures that the driver remains responsive, and replaces the driver container on crash.

    External Provisioner

    This sidecar container helps the driver interact with the K8s API by listening for volume provisioning-related calls.

During deployment, the cunoFS CSI Driver deploys the cunoFS license and cloud credentials as Secrets. The license Secret is imported by the Node and the Controller through an environment variable. The credential Secret is mounted to the Node and the Controller through a Projected Volume and subsequently imported by cunoFS.
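If you want to see these pieces on a running cluster, you can list them with kubectl; the label selector below is the one used earlier in this guide, though whether it is applied to every object (for example the Secrets) may depend on the chart version:

kubectl get -n kube-system daemonset,statefulset,secret -l app.kubernetes.io/name=cunofs-csi-driver
kubectl get csidriver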

Examples#

All of the examples below expect the user to have already deployed the cunoFS CSI Driver onto a K8s cluster, to have created the buckets, and to have provided the CSI with valid credentials to them.

Note that the cunoFS CSI Driver cannot currently create buckets; the buckets you reference must already exist.

The user can deploy a yaml file onto a K8s cluster with:

kubectl apply -f ./path/to/file.yaml

Likewise, a user can delete the deployed resources with:

kubectl delete -f ./path/to/file.yaml

For more information, please refer to the kubectl documentation.

In this example, a user wants to access a specific bucket, cuno-csi-testing, hosted on an S3-compatible service.

The cluster administrator deploys the PV, referencing the bucket through the spec.csi.volumeAttributes.root variable with the chosen mount options.

The cluster user deploys a PVC that references the PV; the PVC's spec.volumeName matches the PV's metadata.name.

Then, the cluster user deploys the application Pod and binds the PVC to a volume: the Pod's spec.volumes[0].persistentVolumeClaim.claimName references the PVC's metadata.name.

Finally, the volume is mounted to the container’s /data path.

The Pod can now write to the /data directory, which will be connected to the bucket.

---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: cunofs-pv
spec:
  capacity:
    storage: 16Ei
  accessModes:
    - ReadWriteOncePod
  csi:
    driver: cunofs.csi.com
    volumeHandle: cunofs-csi-driver-volume-1
    volumeAttributes:
      root: /cuno/s3/cuno-csi-testing
      posix: "true"
      allow_root: "true"
      allow_other: "true"
      auto_restart: "false"
      readonly: "false"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: cunofs-pvc
spec:
  accessModes:
    - ReadWriteOncePod
  storageClassName: "" # disables static provisioning
  resources:
    requests:
      storage: 16Ei # ignored, required
  volumeName: cunofs-pv
---
apiVersion: v1
kind: Pod
metadata:
  name: consumer-pod-static-1
spec:
  containers:
    - name: cunofs-app
      image: centos
      command: ["/bin/sh"]
      args: ["-c", "echo 'Hello from the container!' >> /data/k8s_$(date -u).txt; tail -f /dev/null"]
      volumeMounts:
        - name: persistent-storage
          mountPath: /data
  volumes:
    - name: persistent-storage
      persistentVolumeClaim:
        claimName: cunofs-pvc
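After applying these manifests, a quick sanity check is to make sure the Pod is running and that the file it wrote is visible through the mount (and therefore in the bucket):

kubectl get pod consumer-pod-static-1
kubectl exec consumer-pod-static-1 -- ls /data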

In this example, a user wants to access several buckets, cuno-csi-testing-1 and cuno-csi-testing-2, also hosted on an S3-compatible service.

It is of course possible to declare a PV for each of those buckets and mount them individually; however, there is a second possible approach.

The cluster administrator can declare a PV with an empty spec.csi.volumeAttributes.root. This means that the mountpoint of the PV will have access to three subdirectories, s3, gs and az. The user can access all of the imported buckets through those directories, e.g. an imported testing-bucket on Azure will be accessible through <mountpoint>/az/testing-bucket.

In our example, we want a Pod to have access to the cuno-csi-testing-1 and cuno-csi-testing-2 buckets on s3. Since the cluster user mounted the rootless PV to the /data directory on the application Pod, writing to the /data/s3/cuno-csi-testing-1/ and /data/s3/cuno-csi-testing-2/ directories will write to their respective buckets.

---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: cunofs-pv
spec:
  capacity:
    storage: 16Ei
  accessModes:
    - ReadWriteOncePod
  csi:
    driver: cunofs.csi.com
    volumeHandle: cunofs-csi-driver-volume-2
    volumeAttributes:
      root: ""
      posix: "true"
      allow_root: "true"
      allow_other: "true"
      auto_restart: "false"
      readonly: "false"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: cunofs-pvc
spec:
  accessModes:
    - ReadWriteOncePod
  storageClassName: "" # disables static provisioning
  resources:
    requests:
      storage: 16Ei # ignored, required
  volumeName: cunofs-pv
---
apiVersion: v1
kind: Pod
metadata:
  name: consumer-pod-static
spec:
  containers:
    - name: cunofs-app
      image: centos
      command: ["/bin/sh"]
      args: ["-c", "echo 'Hello from the container!' >> /data/s3/cuno-csi-testing-1/k8s_$(date -u).txt; "echo 'Hello from the container!' >> /data/s3/cuno-csi-testing-2/k8s_$(date -u).txt; tail -f /dev/null"]
      volumeMounts:
        - name: persistent-storage
          mountPath: /data
  volumes:
    - name: persistent-storage
      persistentVolumeClaim:
        claimName: cunofs-pvc

In this example, the cluster administrator sets up a StorageClass, which can be used to allocate storage to multiple PVCs.

In the StorageClass, we define a cloud-type, a bucket, and optionally a bucket-subdir. For each PVC that a cluster user creates referencing the StorageClass, the CSI Driver will create a new unique subdirectory in the bucket and generate a PV that refers to it.

The cluster user simply has to declare a PVC that refers to the StorageClass metadata.name in its spec.storageClassName, and bind the PVC to the application Pod as in the previous examples.

Note

By default, a StorageClass will delete the generated PVs, and therefore their data, once they stop being used. If this is not the desired behaviour, you can set the reclaimPolicy to Retain, which tells K8s not to delete the PV.

---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: cunofs-storageclass
provisioner: cunofs.csi.com
reclaimPolicy: Retain # default is Delete
parameters:
  cloud-type: s3 # required
  bucket: cuno-csi-testing # required
  bucket-subdir: test_kubernetes # optional
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: cunofs-pvc-1
spec:
  accessModes:
    - ReadWriteOncePod
  storageClassName: "cunofs-storageclass" # required for static provisioning
  resources:
    requests:
      storage: 16Ei # ignored, required
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: cunofs-pvc-2
spec:
  accessModes:
    - ReadWriteOncePod
  storageClassName: "cunofs-storageclass" # required for static provisioning
  resources:
    requests:
      storage: 16Ei # ignored, required
---
apiVersion: v1
kind: Pod
metadata:
  name: consumer-pod-dyn
spec:
  containers:
    - name: cunofs-app
      image: centos
      command: ["/bin/sh"]
      args: ["-c", "echo 'Hello from the container!' >> /data-1/dynamic-k8s_$(date -u).txt; echo 'Hello from the container!' >> /data-2/dynamic-k8s_$(date -u).txt; tail -f /dev/null"]
      volumeMounts:
        - name: dynamic-cuno-storage-1
          mountPath: /data-1
        - name: dynamic-cuno-storage-2
          mountPath: /data-2
  volumes:
    - name: dynamic-cuno-storage-1
      persistentVolumeClaim:
        claimName: cunofs-pvc-1
    - name: dynamic-cuno-storage-2
      persistentVolumeClaim:
        claimName: cunofs-pvc-2
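After applying this example, each PVC should be bound to its own driver-generated PV, backed by a unique subdirectory under test_kubernetes in the cuno-csi-testing bucket. A quick way to check (the generated PV names will differ from run to run):

kubectl get pvc cunofs-pvc-1 cunofs-pvc-2
kubectl get pv
kubectl exec consumer-pod-dyn -- ls /data-1 /data-2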