Kubernetes CSI Driver#
The cunoFS CSI Driver integrates your cloud storage services (Amazon S3, Google Cloud, and Azure Cloud) into a Kubernetes cluster. The driver is available through Helm under oci://registry-1.docker.io/cunofs/cunofs-csi-chart. More information can be found on Docker Hub.
Install#
Ensure that Helm is installed. If not, follow the Helm installation guide.
Deploy the cunoFS CSI Driver:
helm install cunofs-csi-chart oci://registry-1.docker.io/cunofs/cunofs-csi-chart \
    --version <chart_version> \
    --set cunofsLicense.license="<license-text>" \
    --set credsToImport="{<credentials-1>,<credentials-2>, ... ,<credentials-N>}"
--set cunofsLicense.license: (required) cunoFS license
--set credsToImport: (optional) cloud credentials
Display the status of the cunoFS CSI Driver resources:
kubectl get all -l app.kubernetes.io/name=cunofs-csi-driver
Note
To ensure that the cloud credentials are passed correctly, provide them in base64 encoding. For example:
--set credsToImport="{$(cat creds-1.txt | base64), $(cat creds-2.json | base64)}"
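Since the encoded strings are what Helm receives, it can be worth sanity-checking locally that a credentials file survives the base64 round trip. A minimal sketch (the file contents and names below are stand-ins for your real credentials files):

```shell
# Create a stand-in credentials file (replace with your real creds file).
printf 'aws_access_key_id=EXAMPLEKEY\naws_secret_access_key=EXAMPLESECRET\n' > creds-1.txt

# Encode it the same way it would be passed to --set credsToImport.
encoded=$(base64 < creds-1.txt)

# Decoding must reproduce the original file byte-for-byte.
printf '%s\n' "$encoded" | base64 -d > decoded.txt
diff creds-1.txt decoded.txt && echo "round-trip OK"
```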
Update#
Upgrade to the latest version:
helm upgrade --reuse-values cunofs-csi-chart oci://registry-1.docker.io/cunofs/cunofs-csi-chart \
    --version <new_version>
Uninstall#
helm uninstall cunofs-csi-chart
Storage allocation#
The cunoFS CSI Driver supports the following provisioning strategies:
Static provisioning#
To allocate storage statically, define one or more PVs (PersistentVolumes), providing the bucket details and options:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: cunofs-pv
spec:
  capacity:
    storage: 16Ei # ignored but required
  accessModes:
    - ReadWriteOncePod # currently only "ReadWriteOncePod" is supported
  csi:
    driver: cunofs.csi.com # required
    volumeHandle: cunofs-csi-driver-volume
    volumeAttributes:
      root: "/cuno/s3/bucket/subdirectory/other_subdirectory" # optional
      posix: "true" # optional
      allow_root: "true" # optional
      allow_other: "true" # optional
      auto_restart: "true" # optional
      readonly: "true" # optional
Then, define a PVC (PersistentVolumeClaim):
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: cunofs-pvc
spec:
  accessModes:
    - ReadWriteOncePod
  storageClassName: "" # ensures that no dynamic provisioning occurs
  resources:
    requests:
      storage: 16Ei # ignored but required
  volumeName: cunofs-pv # PV metadata.name
Finally, cluster users can mount the PVC:
apiVersion: v1
kind: Pod
metadata:
  name: consumer-pod
spec:
  containers:
    - name: cunofs-app
      image: centos
      command: ["/bin/sh"]
      args: ["-c", "echo 'Hello from the container!' > /data/s3/cuno-csi-testing/K8s_$(date -u).txt; tail -f /dev/null"]
      volumeMounts:
        - name: persistent-storage
          mountPath: /data
  volumes:
    - name: persistent-storage
      persistentVolumeClaim:
        claimName: cunofs-pvc # PVC metadata.name
Dynamic provisioning#
To allocate storage dynamically, define a StorageClass providing the bucket details and options:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: cunofs-storageclass
provisioner: cunofs.csi.com
reclaimPolicy: Retain # default is Delete
parameters:
  cloud-type: s3 # required; one of s3/az/gs
  bucket: cuno-csi-testing # required; the bucket must already exist
  bucket-subdir: test_kubernetes # optional
  # Options passed down to the PV:
  posix: "true" # optional
  allow_root: "true" # optional
  allow_other: "true" # optional
  auto_restart: "true" # optional
  readonly: "true" # optional
Then, define a PVC that references the StorageClass:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: cunofs-pvc
spec:
  accessModes:
    - ReadWriteOncePod
  storageClassName: "cunofs-storageclass" # StorageClass metadata.name
  resources:
    requests:
      storage: 16Ei # ignored but required
Cluster users can mount the PVC similarly to the static allocation case:
apiVersion: v1
kind: Pod
metadata:
  name: consumer-pod
spec:
  containers:
    - name: cunofs-app
      image: centos
      command: ["/bin/sh"]
      args: ["-c", "echo 'Hello from the container!' > /data/s3/cuno-csi-testing/K8s_$(date -u).txt; tail -f /dev/null"]
      volumeMounts:
        - name: persistent-storage
          mountPath: /data
  volumes:
    - name: persistent-storage
      persistentVolumeClaim:
        claimName: cunofs-pvc # PVC metadata.name
Alternatively, cluster users can create a generic ephemeral (inline) volume, which doesn't require a predefined PVC:
apiVersion: v1
kind: Pod
metadata:
  name: consumer-pod-dyn-inline
spec:
  containers:
    - name: cunofs-app-inline
      image: centos
      command: ["/bin/sh"]
      args: ["-c", "echo 'Hello from the container, inline volume!' >> /data/generic-inline-k8s_$(date -u).txt; tail -f /dev/null"]
      volumeMounts:
        - name: inline-cuno-storage
          mountPath: /data
  volumes:
    - name: inline-cuno-storage
      ephemeral:
        volumeClaimTemplate:
          metadata:
            labels:
              type: my-inline-volume
          spec:
            accessModes: [ "ReadWriteOncePod" ]
            storageClassName: cunofs-storageclass # StorageClass metadata.name
            resources:
              requests:
                storage: 16Ei # ignored but required
Note
The current version of the cunoFS CSI Driver does not support CSI ephemeral (inline) volumes; the example above uses a generic ephemeral volume, which is supported.
Configuration#
This section offers additional details about the configuration options for the cunoFS CSI driver.
Helm chart#
The Helm chart can also be installed, configured and deployed manually.
Download the chart manually:
helm pull --untar oci://registry-1.docker.io/cunofs/cunofs-csi-chart --version <chart_version>
Set the cunofsLicense.license variable and import the cloud credentials:
# values.yaml file
cunofsLicense:
  license: "<your license key>"
credsToImport:
  - "<credential-1>"
  - "<credential-2>"
  - "<..>"
  - "<credential-N>"
Finally, install the local chart by pointing to its directory:
helm install cunofs-csi-chart <path-to-chart>
Available options:
| Yaml Value | Description | Default Value |
|---|---|---|
| cunofsCSIimage.pullPolicy | Specifies how the Docker image is deployed onto the Node and Controller. Only useful to change if self-hosting the Docker image | Always |
| cunofsCSIimage.name | Specifies the cunoFS CSI Docker image. Only useful to change if self-hosting the Docker image | docker.io/cunofs/cunofs_csi:latest |
| cunofsLicense.license | The license used for activating cunoFS on the Driver. It needs to be a valid Professional or Enterprise license | \<empty\> |
| credsToImport | Yaml array that you can populate with your s3/az/gs credential files | \<empty\> |
PersistentVolume options#
| Yaml Value | Description |
|---|---|
| metadata.name | Can be any name as long as it's unique |
| spec.capacity.storage | This value is ignored, but it needs to be set |
| spec.csi.driver | It's required to set this to "cunofs.csi.com" |
| spec.csi.volumeHandle | Name of the volume; needs to be unique |
| spec.accessModes | We currently only support "ReadWriteOncePod" |
| spec.csi.volumeAttributes.root | The cloud URI that will be mounted at the target mount path. If not specified, you can access s3, az and gs through the target's '/s3', '/az' or '/gs' directories |
| spec.csi.volumeAttributes.posix | Set it to "true" to enforce strict POSIX mode for cunoFS |
| spec.csi.volumeAttributes.allow_root | Set it to "true" to allow the root user to use the mount |
| spec.csi.volumeAttributes.allow_other | Set it to "true" to allow all users to use the mount |
| spec.csi.volumeAttributes.auto_restart | Set it to "true" to automatically restart cunoFS if an error occurs |
| spec.csi.volumeAttributes.readonly | Set it to "true" to mount the volume as read-only |
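To illustrate the default behaviour of the root attribute described above, here is a minimal sketch of a PV that omits root, so each configured cloud is reachable under the mount's /s3, /az and /gs subdirectories (the metadata.name and volumeHandle values are illustrative, not prescribed by the driver):

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: cunofs-pv-all-clouds # illustrative name
spec:
  capacity:
    storage: 16Ei # ignored but required
  accessModes:
    - ReadWriteOncePod
  csi:
    driver: cunofs.csi.com # required
    volumeHandle: cunofs-all-clouds-volume # illustrative; must be unique
    # "root" is omitted, so the mount exposes /s3, /az and /gs subdirectories
```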
StorageClass options#
| Yaml Value | Description |
|---|---|
| metadata.name | Can be any name as long as it's unique |
| provisioner | Needs to be "cunofs.csi.com" |
| reclaimPolicy | "Retain" will not delete the generated PVs and their storage when the PVCs go out of scope; "Delete" will |
| parameters.cloud-type | Can be "s3", "az" or "gs" |
| parameters.bucket | The bucket used to create volumes |
| parameters.bucket-subdir | Optional. The subdirectory of the bucket where the PVCs will get generated. Can be nested subdirectories like "dir/other_dir/yet_another_dir" |
| parameters.posix, parameters.allow_root, parameters.allow_other, parameters.auto_restart, parameters.readonly | These options are passed down to the generated PV and behave as described in the PersistentVolume options |
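As a worked example of the parameters above, a StorageClass targeting an Azure container might look like the following sketch (the metadata.name, bucket name and subdirectory are placeholders; as noted above, the container must already exist):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: cunofs-az-storageclass # placeholder name
provisioner: cunofs.csi.com
reclaimPolicy: Delete # generated PVs and their storage are removed when the PVC goes out of scope
parameters:
  cloud-type: az
  bucket: my-az-container # placeholder; must already exist
  bucket-subdir: k8s-volumes # placeholder; optional
```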
Technical Details#
The cunoFS CSI Driver abides by the Kubernetes Container Storage Interface (CSI) standard. It implements the Node, Controller and Identity plugins, and uses sidecar containers to simplify its deployment and maintenance. The CSI Driver ships as a single binary, which can act as the Node or the Controller depending on the context (how it's deployed and which sidecar containers are connected). The Helm chart deploys Docker containers that have this binary preinstalled.

The Node plugin refers to the ability to mount and organise existing PersistentVolumes on a Kubernetes node. The Controller plugin implements the ability to create PersistentVolumes dynamically through a StorageClass. The Node and the Controller need to handle logic at different levels:
- The Node plugin needs to be deployed on every K8s Node, since it handles mounting logic that's specific to each machine on which the application containers run. Therefore, it is deployed via a K8s DaemonSet. Additionally, these sidecar containers are shipped with the Node:
  - Liveness Probe: this sidecar container ensures that the driver remains responsive, and replaces the driver container on crash.
  - Node Driver Registrar: this sidecar container registers the driver with the kubelet to simplify its discovery and communication.
- The Controller plugin needs to be unique across a Kubernetes cluster, since it handles the lifecycle of PersistentVolumes, which are K8s global objects. It is therefore managed through a K8s StatefulSet:
  - Liveness Probe: this sidecar container, like with the Node plugin, ensures that the driver remains responsive, and replaces the driver container on crash.
  - External Provisioner: this sidecar container helps the driver interact with the K8s API by listening for volume provisioning-related calls.
During deployment, the cunoFS CSI Driver creates the cunoFS license and cloud credentials as Secrets. The license Secret is imported by the Node and the Controller through an environment variable. The credential Secret is mounted into the Node and the Controller through a Projected Volume and subsequently imported by cunoFS.
Limitations#
Not every existing K8s
optional feature is currently implemented in this driver.
Please contact support@cuno.io for specific feature inquiries.
Due to the internals of
K8s
, thecunoFS
CSI Driver makes use ofcunoFS mount
as a backend instead of regularcunoFS
. This means that performance will be high, but not always as high as a regularcunoFS
installation.Not every
cunoFS
option is currently available for use in the driver. Please refer to the configuration section for the available options.There currently isn’t a way of easily using
cunoFS Fusion
with this driver.Currently, only the
ReadWriteOncePod
access mode is supported. You can always createPVs
with the same root, although this can be a bit more verbose than rusing the samePVC
orPV
.The
cunoFS
CSI Driver currently doesn’t support CSI Ephemeral Volumes, raw block volumes, volume snapshotting, volume expansion, volume cloning and volume topology options.This driver currently don’t support creating separate
ServiceAccounts
for each deployed component. The expected way of managing access privileges, if enabled in your cluster, is to deploy the driver in a separate namespace and managing the permissions of the defaultServiceAccount
.
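The ReadWriteOncePod workaround mentioned above, creating several PVs that point at the same root, can be sketched as follows (the PV names, volume handles and the root URI are illustrative; each pod then binds its own PVC to one of these PVs):

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: cunofs-pv-a # illustrative name
spec:
  capacity:
    storage: 16Ei # ignored but required
  accessModes:
    - ReadWriteOncePod
  csi:
    driver: cunofs.csi.com
    volumeHandle: cunofs-volume-a # must be unique per PV
    volumeAttributes:
      root: "/cuno/s3/shared-bucket/shared-dir" # same root in both PVs
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: cunofs-pv-b # illustrative name
spec:
  capacity:
    storage: 16Ei # ignored but required
  accessModes:
    - ReadWriteOncePod
  csi:
    driver: cunofs.csi.com
    volumeHandle: cunofs-volume-b # must be unique per PV
    volumeAttributes:
      root: "/cuno/s3/shared-bucket/shared-dir" # same root in both PVs
```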