Kubernetes CSI Driver#
The cunoFS CSI Driver facilitates seamless integration of your cloud storage services (Amazon S3, Google Cloud, and Azure Cloud) within a Kubernetes cluster. The driver is available through Helm under oci://registry-1.docker.io/cunofs/cunofs-csi-chart. More information can be found on Docker Hub.
Install#
Ensure that Helm is installed. If not, follow the Helm installation guide.
Deploy the cunoFS CSI Driver:
helm install cunofs-csi-chart oci://registry-1.docker.io/cunofs/cunofs-csi-chart \
--set cunofsLicense.license="<license-text>" \
--set credsToImport="{<credentials-1>,<credential-2>, ... ,<credentials-N>}"
--set cunofsLicense.license: (required) cunoFS license [more details]
--set credsToImport: (optional) cloud credentials [more details]
Display the status of the cunoFS CSI Driver resources:
kubectl get all -l app.kubernetes.io/name=cunofs-csi-driver
Note
For security reasons, helm doesn’t allow access to files via paths.
Therefore, you need to provide the credential file contents in credsToImport, and not the paths.
To ensure that the cloud credentials are passed correctly, please provide them in base64 encoding, on a single line. For example:
--set credsToImport="{$(base64 -w0 < creds-1.txt),$(base64 -w0 < creds-2.json)}"
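The encoding round trip can be checked locally before invoking Helm. This is a minimal sketch; the credentials file name and contents are hypothetical placeholders:

```shell
# Hypothetical credentials file, for illustration only.
printf 'aws_access_key_id=EXAMPLEKEY\n' > creds-1.txt

# -w0 (GNU coreutils) disables line wrapping, which would otherwise
# insert newlines into the value passed to --set.
encoded=$(base64 -w0 < creds-1.txt)

# Decoding should reproduce the original file exactly.
printf '%s' "$encoded" | base64 -d > decoded.txt
diff creds-1.txt decoded.txt && echo "round-trip OK"
```

If the two files differ, fix the encoding before passing the value to helm install.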
Update#
Upgrade to the latest version:
helm upgrade --reuse-values cunofs-csi-chart 'oci://registry-1.docker.io/cunofs/cunofs-csi-chart'
You can append --version <version> to upgrade to a specific version.
Uninstall#
helm uninstall cunofs-csi-chart
Storage allocation#
The cunoFS CSI Driver supports the following provisioning strategies:
Static provisioning#
To allocate storage statically, define one or more PV (PersistentVolume) providing the bucket details and options:
PV manifest defined by cluster admin#
apiVersion: v1
kind: PersistentVolume
metadata:
  name: cunofs-pv
spec:
  capacity:
    storage: 16Ei # ignored but required
  accessModes:
    - ReadWriteOncePod # currently, only "ReadWriteOncePod" is supported
  csi:
    driver: cunofs.csi.com # required
    volumeHandle: cunofs-csi-driver-volume
    volumeAttributes:
      root: "/cuno/s3/bucket/subdirectory/other_subdirectory" # optional
      posix: "true" # optional
      allow_root: "false" # optional
      allow_other: "true" # optional
      auto_restart: "true" # optional
      readonly: "true" # optional
Then, define a PVC (PersistentVolumeClaim):
PVC manifest defined by cluster admin#
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: cunofs-pvc
spec:
  accessModes:
    - ReadWriteOncePod
  storageClassName: "" # ensures that no dynamic provisioning occurs
  resources:
    requests:
      storage: 16Ei # ignored but required
  volumeName: cunofs-pv # PV metadata.name
Finally, cluster users can mount the PVC:
Pod manifest defined by cluster user#
apiVersion: v1
kind: Pod
metadata:
  name: consumer-pod
spec:
  containers:
    - name: cunofs-app
      image: centos
      command: ["/bin/sh"]
      args: ["-c", "echo 'Hello from the container!' > /data/s3/cuno-csi-testing/K8s_$(date -u).txt; tail -f /dev/null"]
      volumeMounts:
        - name: persistent-storage
          mountPath: /data
  volumes:
    - name: persistent-storage
      persistentVolumeClaim:
        claimName: cunofs-pvc # PVC metadata.name
Dynamic provisioning#
To allocate storage dynamically, define a StorageClass providing the bucket details and options:
StorageClass manifest defined by cluster admin#
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: cunofs-storageclass
provisioner: cunofs.csi.com
reclaimPolicy: Retain # default is Delete
parameters:
  cloud-type: s3 # required; one of s3/az/gs
  bucket: cuno-csi-testing # required; the bucket must already exist
  bucket-subdir: test_kubernetes # optional
  # Options passed down to the PV:
  posix: "true" # optional
  allow_root: "false" # optional
  allow_other: "true" # optional
  auto_restart: "true" # optional
  readonly: "true" # optional
Then, define a PVC which has a reference to the StorageClass:
PVC manifest defined by cluster admin#
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: cunofs-pvc
spec:
  accessModes:
    - ReadWriteOncePod
  storageClassName: "cunofs-storageclass" # StorageClass metadata.name
  resources:
    requests:
      storage: 16Ei # ignored but required
Cluster users can mount the PVC similarly to the static allocation case:
Pod manifest defined by cluster user#
apiVersion: v1
kind: Pod
metadata:
  name: consumer-pod
spec:
  containers:
    - name: cunofs-app
      image: centos
      command: ["/bin/sh"]
      args: ["-c", "echo 'Hello from the container!' > /data/s3/cuno-csi-testing/K8s_$(date -u).txt; tail -f /dev/null"]
      volumeMounts:
        - name: persistent-storage
          mountPath: /data
  volumes:
    - name: persistent-storage
      persistentVolumeClaim:
        claimName: cunofs-pvc # PVC metadata.name
Alternatively, cluster users can create a generic inline volume which doesn’t require a PVC:
Pod manifest defined by cluster user#
apiVersion: v1
kind: Pod
metadata:
  name: consumer-pod-dyn-inline
spec:
  containers:
    - name: cunofs-app-inline
      image: centos
      command: ["/bin/sh"]
      args: ["-c", "echo 'Hello from the container, inline volume!' >> /data/generic-inline-k8s_$(date -u).txt; tail -f /dev/null"]
      volumeMounts:
        - name: inline-cuno-storage
          mountPath: /data
  volumes:
    - name: inline-cuno-storage
      ephemeral:
        volumeClaimTemplate:
          metadata:
            labels:
              type: my-inline-volume
          spec:
            accessModes: [ "ReadWriteOncePod" ]
            storageClassName: cunofs-storageclass # StorageClass metadata.name
            resources:
              requests:
                storage: 16Ei # ignored but required
Note
The current version of the cunoFS CSI Driver does not support CSI inline volumes; use generic ephemeral volumes, as shown above, instead.
Configuration#
This section offers additional details about the configuration options for the cunoFS CSI driver.
Helm chart#
The Helm chart can also be installed, configured and deployed manually.
Download the chart manually:
helm pull --untar oci://registry-1.docker.io/cunofs/cunofs-csi-chart
Set the cunofsLicense.license variable and import the cloud credentials:
# values.yaml file
cunofsLicense:
  license: "<your license key>"
credsToImport:
  - "<credential-1>"
  - "<credential-2>"
  - "<..>"
  - "<credential-N>"
Finally, install the local chart by pointing to its directory:
helm install cunofs-csi-chart <path-to-chart>
Available options:

| Yaml Value | Description | Default Value |
|---|---|---|
| | Optionally change the name of the deployed driver. Only useful if you want to deploy several instances of the driver. | |
| | Specifies how the docker image is deployed onto the Node and Controller. Only useful to change if self-hosting the docker image. | |
| | Specifies the cunoFS CSI docker image. Only useful to change if self-hosting the docker image under a different name (note: do not include the version here). | |
| | Specifies the docker image's version. No need to change it unless you have a good reason to. | <equal to chart version> |
| cunofsLicense.license | The license used for activating cunoFS on the Driver. It needs to be a valid Professional or Enterprise license. | <empty> |
| credsToImport | Yaml array that you can populate with your s3/az/gs credential files. | <empty> |
| rbac.useRBAC | Enables out-of-the-box support for RBAC clusters (deploys the required ClusterRole/ClusterRoleBinding). | true |
| | On Amazon EKS, associates an IAM role to the driver's ServiceAccount. | <empty> |
PersistentVolume options#
Warning
Note that, due to K8s parameter-passing design decisions, the boolean parameters require strings rather than YAML booleans.
For this reason, please use "true" and "false" instead of true and false.
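In a PV manifest, the distinction looks like this; a minimal fragment of the volumeAttributes shown earlier:

```yaml
csi:
  volumeAttributes:
    posix: "true"     # correct: quoted, passed through as a string
    readonly: "false" # correct
    # readonly: false # wrong: a bare YAML boolean is not a string parameter
```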
| Yaml Value | Description |
|---|---|
| metadata.name | Can be any legal, unique name |
| spec.capacity.storage | This value is ignored, but is required to be set by the CSI specification |
| spec.csi.driver | Set this to the name of the CSI driver you deployed, which is cunofs.csi.com by default |
| spec.csi.volumeHandle | Name of the volume; needs to be unique |
| spec.accessModes | All access modes are supported as of v1.0.2 (see cunoFS Fusion Support for ReadWriteMany caveats) |
| root | This is the cloud URI that will be mounted to the target mount path. If not specified, you can access s3, az and gs through the target mount path + /s3, /az or /gs |
| posix | Optional. Set it to "true" or "false" |
| allow_root | Optional. Set it to "true" or "false" |
| allow_other | Optional. Set it to "true" or "false" |
| auto_restart | Optional. Set it to "true" or "false" |
| readonly | Optional. Set it to "true" or "false" |
| | Sets the |
| | Sets the |
| fusion_pvc | Enables cunoFS Fusion on the PV by naming the backing PVC (see cunoFS Fusion Support) |
StorageClass options#
| Yaml Value | Description |
|---|---|
| metadata.name | Can be any name as long as it's unique |
| provisioner | The name of the driver, by default: cunofs.csi.com |
| reclaimPolicy | Retain or Delete; the default is Delete |
| cloud-type | Can be s3, az or gs |
| bucket | The bucket used to create volumes; it must already exist |
| bucket-subdir | Optional. The subdirectory of the bucket where the volumes are created |
| posix, allow_root, allow_other, auto_restart, readonly | These options will be passed down to the generated PVs |
| fusionStorageClass | Tells cunoFS Fusion to use the given StorageClass for the backing PVs (see cunoFS Fusion Support) |
cunoFS Fusion Support#
The cunoFS CSI Driver, as of version v1.0.2, supports all access modes.
However, using a single PV across multiple pods at once with the ReadWriteMany access mode, whether on the same node or not, does not guarantee write consistency when writing to the same file.
We offer a way around this limitation by using cunoFS Fusion.
cunoFS Fusion enables users to get the best out of object storage and traditional shared filesystems at the same time.
In the case of the cunoFS CSI Driver, it gives you the ability to write to a PV with ReadWriteMany without the potential issues of multiple writers.
cunoFS Fusion uses object storage with a supporting backing filesystem and will intelligently use the filesystem that has the best performance for large or small files, while ensuring that reads and writes remain ordered.
For more information, please refer to the cunoFS Fusion documentation.
Static Allocation#
For static allocation, you need to deploy two PVs: one for cunoFS, and one for the backing mount.
Choose a backing mount that offers write consistency (Amazon EFS, Amazon EBS, a local NFS server, etc.), and deploy it with a PV and PVC, as if it were to be used by a Pod.
Then, refer to the PVC’s name in the spec.csi.volumeAttributes.fusion_pvc parameter of the cunoFS PV.
The cunoFS CSI Driver will mount the PV to itself and bind the two filesystems.
PV/PVC pairs defined by cluster admin#
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: "backing-pv"
# <...>
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: backing-pvc
spec:
  volumeName: "backing-pv"
# <...>
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: cunofs-pv
spec:
  capacity:
    storage: 16Ei # ignored, required
  accessModes:
    - ReadWriteMany
  csi:
    driver: cunofs.csi.com
    volumeHandle: cunofs-csi-driver-volume
    volumeAttributes:
      # <...>
      fusion_pvc: "backing-pvc" # gets the name of the PVC to try and mount to it
Warning
Please ensure that the PV/PVC pair you create has the same access mode as the cunoFS PV/PVC pair and compatible parameters (readonly, etc.).
If you have any issues deploying the cunoFS Fusion PV/PVC pair, please first ensure that the backing pair is correctly set up.
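As an illustration, a backing pair served by an NFS server might look like the following. This is a sketch only: the server address and export path are hypothetical, and any write-consistent backing store (Amazon EFS, Amazon EBS, etc.) works the same way:

```yaml
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: backing-pv
spec:
  capacity:
    storage: 8Gi
  accessModes:
    - ReadWriteMany              # must match the cunoFS PV/PVC access mode
  nfs:                           # hypothetical NFS backing store
    server: nfs.example.internal
    path: /exports/cunofs-backing
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: backing-pvc
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: ""           # bind statically to backing-pv
  volumeName: backing-pv
  resources:
    requests:
      storage: 8Gi
```

The cunoFS PV then references backing-pvc through its fusion_pvc attribute, as shown above.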
Dynamic Allocation#
The cunoFS CSI Driver supports dynamic provisioning of Fusion PV pairs.
Simply deploy a backing StorageClass and refer to it in the cunoFS StorageClass's parameters.fusionStorageClass parameter.
The cunoFS CSI Driver will use it to generate and delete backing PVs alongside cunoFS PVs and bind them as needed.
StorageClass manifests defined by cluster admin#
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: "backing-sc"
# <...>
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: cunofs-storageclass
provisioner: cunofs.csi.com
parameters:
  cloud-type: s3
  bucket: cuno-csi-testing
  bucket-subdir: test_kubernetes
  fusionStorageClass: "backing-sc" # refers to the backing StorageClass
Warning
Please ensure that the deployed StorageClass can create PVs that bind to PVCs with ReadWriteMany of size 8Gi.
In a future release, you will be able to parameterise it.
RBAC Support#
By default, the cunoFS CSI Driver deploys a ServiceAccount, ClusterRole and ClusterRoleBinding to support Role-Based Access Control (RBAC) out of the box.
They are respectively deployed under the following names: <release name>-serviceaccount, <release name>-clusterrole and <release name>-clusterrolebinding.
This also means that the cunoFS CSI Driver supports Amazon EKS clusters out-of-the-box.
You can choose not to deploy the ClusterRole and ClusterRoleBinding by setting the rbac.useRBAC property to false in the values.yaml file:
helm install cunofs-csi-chart oci://registry-1.docker.io/cunofs/cunofs-csi-chart \
--set cunofsLicense.license="<license-text>" \
--set credsToImport="{<credentials-1>,<credential-2>, ... ,<credentials-N>}" \
--set rbac.useRBAC=false
Technical Details#
The cunoFS CSI Driver abides by the Kubernetes Container Storage Interface (CSI) standard.
It implements the Node, Controller and Identity plugins and uses sidecar containers to simplify its deployment and maintenance.
The CSI Driver is shipped as a single binary, which can act as the Node or the Controller depending on the context (how it's deployed and which sidecar containers are connected).
The Helm chart deploys Docker containers that have this binary preinstalled.
The Node plugin refers to the ability to mount and organise existing PersistentVolumes on a Kubernetes node.
The Controller plugin implements the ability to create PersistentVolumes dynamically through a StorageClass.
The Node and the Controller need to handle logic at different levels:
The Node plugin needs to be deployed on every K8s Node, since it handles mounting logic that's specific to each machine on which the application containers run. Therefore, it is deployed via a K8s DaemonSet. Additionally, these sidecar containers are shipped with the Node:

- Liveness Probe: this sidecar container ensures that the driver remains responsive, and replaces the driver container on crash.
- Node Driver Registrar: this sidecar container registers the driver with the kubelet to simplify its discovery and communication.

The Controller plugin needs to be unique across a Kubernetes cluster, since it handles the lifecycle of PersistentVolumes, which are K8s global objects. It is therefore managed through a K8s StatefulSet:

- Liveness Probe: this sidecar container, like with the Node plugin, ensures that the driver remains responsive, and replaces the driver container on crash.
- External Provisioner: this sidecar container helps the driver interact with the K8s API by listening for volume provisioning-related calls.
During the deployment, the cunoFS CSI Driver deploys the cunoFS license and cloud credentials as Secrets.
The license Secret is imported by the Node and the Controller through an environment variable.
The credential Secret is mounted to the Node and the Controller through a Projected Volume and sequentially imported by cunoFS.
Limitations#
Not every existing K8s optional feature is currently implemented in this driver.
Please contact support@cuno.io for specific feature inquiries.
- Due to the internals of K8s, the cunoFS CSI Driver makes use of cunoFS mount as a backend instead of regular cunoFS. This means that performance will be high, but not always as high as a regular cunoFS installation.
- Not every cunoFS option is currently available for use in the driver. Please refer to the configuration section for the available options.
- The ReadWriteMany access mode doesn't guarantee write consistency without cunoFS Fusion.
- The cunoFS CSI Driver currently doesn't support CSI Ephemeral Volumes, raw block volumes, volume snapshotting, volume expansion, volume cloning and volume topology options.