Kubernetes CSI Driver#
The cunoFS CSI Driver integrates your cloud storage services (Amazon S3, Google Cloud, and Azure Cloud) into a Kubernetes cluster. The driver is available through Helm under oci://registry-1.docker.io/cunofs/cunofs-csi-chart. More information can be found on Docker Hub.
Install#
Ensure that Helm is installed. If not, follow the Helm installation guide.
Deploy the cunoFS CSI Driver:
helm install cunofs-csi-chart oci://registry-1.docker.io/cunofs/cunofs-csi-chart \
--set cunofsLicense.license="<license-text>" \
--set credsToImport="{<credentials-1>,<credentials-2>, ... ,<credentials-N>}"
--set cunofsLicense.license: (required) cunoFS license [more details]
--set credsToImport: (optional) cloud credentials [more details]
Display the status of the cunoFS CSI Driver resources:
kubectl get all -l app.kubernetes.io/name=cunofs-csi-driver
Note
For security reasons, Helm doesn't allow access to files via paths.
Therefore, you need to provide the credential file contents in credsToImport, not the path.
To ensure that the cloud credentials are passed correctly, please provide them in base64 encoding. For example:
--set credsToImport="{$(cat creds-1.txt | base64), $(cat creds-2.json | base64)}"
Update#
Upgrade to the latest version:
helm upgrade --reuse-values cunofs-csi-chart oci://registry-1.docker.io/cunofs/cunofs-csi-chart
You can append --version <version> to upgrade to a specific version.
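For example (the version value is a placeholder for an actual chart version published on Docker Hub):
helm upgrade --reuse-values cunofs-csi-chart oci://registry-1.docker.io/cunofs/cunofs-csi-chart \
--version <version>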
Uninstall#
helm uninstall cunofs-csi-chart
Storage allocation#
The cunoFS CSI Driver supports the following strategies:
Static provisioning#
To allocate storage statically, define one or more PVs (PersistentVolumes) providing the bucket details and options:
PV manifest defined by cluster admin#
apiVersion: v1
kind: PersistentVolume
metadata:
  name: cunofs-pv
spec:
  capacity:
    storage: 16Ei # ignored but required
  accessModes:
    - ReadWriteOncePod # Currently only "ReadWriteOncePod" is supported
  csi:
    driver: cunofs.csi.com # required
    volumeHandle: cunofs-csi-driver-volume
    volumeAttributes:
      root: "/cuno/s3/bucket/subdirectory/other_subdirectory" # optional
      posix: "true" # optional
      allow_root: "false" # optional
      allow_other: "true" # optional
      auto_restart: "true" # optional
      readonly: "true" # optional
Then, define a PVC (PersistentVolumeClaim):
PVC manifest defined by cluster admin#
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: cunofs-pvc
spec:
  accessModes:
    - ReadWriteOncePod
  storageClassName: "" # ensures that no dynamic provisioning occurs
  resources:
    requests:
      storage: 16Ei # ignored but required
  volumeName: cunofs-pv # PV metadata.name
Finally, cluster users can mount the PVC:
Pod manifest defined by cluster user#
apiVersion: v1
kind: Pod
metadata:
  name: consumer-pod
spec:
  containers:
    - name: cunofs-app
      image: centos
      command: ["/bin/sh"]
      args: ["-c", "echo 'Hello from the container!' > /data/s3/cuno-csi-testing/K8s_$(date -u).txt; tail -f /dev/null"]
      volumeMounts:
        - name: persistent-storage
          mountPath: /data
  volumes:
    - name: persistent-storage
      persistentVolumeClaim:
        claimName: cunofs-pvc # PVC metadata.name
Dynamic provisioning#
To allocate storage dynamically, define a StorageClass providing the bucket details and options:
StorageClass manifest defined by cluster admin#
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: cunofs-storageclass
provisioner: cunofs.csi.com
reclaimPolicy: Retain # default is Delete
parameters:
  cloud-type: s3 # requires one of s3/az/gs
  bucket: cuno-csi-testing # requires a bucket that already exists
  bucket-subdir: test_kubernetes # optional
  # Options passed down to the PV:
  posix: "true" # optional
  allow_root: "false" # optional
  allow_other: "true" # optional
  auto_restart: "true" # optional
  readonly: "true" # optional
Then, define a PVC that references the StorageClass:
PVC manifest defined by cluster admin#
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: cunofs-pvc
spec:
  accessModes:
    - ReadWriteOncePod
  storageClassName: "cunofs-storageclass" # StorageClass metadata.name
  resources:
    requests:
      storage: 16Ei # ignored but required
Cluster users can mount the PVC similarly to the static allocation case:
Pod manifest defined by cluster user#
apiVersion: v1
kind: Pod
metadata:
  name: consumer-pod
spec:
  containers:
    - name: cunofs-app
      image: centos
      command: ["/bin/sh"]
      args: ["-c", "echo 'Hello from the container!' > /data/s3/cuno-csi-testing/K8s_$(date -u).txt; tail -f /dev/null"]
      volumeMounts:
        - name: persistent-storage
          mountPath: /data
  volumes:
    - name: persistent-storage
      persistentVolumeClaim:
        claimName: cunofs-pvc # PVC metadata.name
Alternatively, cluster users can create a generic ephemeral inline volume, which doesn't require a separately defined PVC:
Pod manifest defined by cluster user#
apiVersion: v1
kind: Pod
metadata:
  name: consumer-pod-dyn-inline
spec:
  containers:
    - name: cunofs-app-inline
      image: centos
      command: ["/bin/sh"]
      args: ["-c", "echo 'Hello from the container, inline volume!' >> /data/generic-inline-k8s_$(date -u).txt; tail -f /dev/null"]
      volumeMounts:
        - name: inline-cuno-storage
          mountPath: /data
  volumes:
    - name: inline-cuno-storage
      ephemeral:
        volumeClaimTemplate:
          metadata:
            labels:
              type: my-inline-volume
          spec:
            accessModes: [ "ReadWriteOncePod" ]
            storageClassName: cunofs-storageclass # StorageClass metadata.name
            resources:
              requests:
                storage: 16Ei # ignored but required
Note
The current version of the cunoFS CSI Driver does not support CSI ephemeral inline volumes.
Configuration#
This section offers additional details about the configuration options for the cunoFS CSI driver.
Helm chart#
The Helm chart can also be installed, configured, and deployed manually.
Download the chart manually:
helm pull --untar oci://registry-1.docker.io/cunofs/cunofs-csi-chart
Set the cunofsLicense.license variable and import the cloud credentials:
# values.yaml file
cunofsLicense:
  license: "<your license key>"
credsToImport:
  - "<credential-1>"
  - "<credential-2>"
  - "<..>"
  - "<credential-N>"
Finally, install the local chart by pointing to its directory:
helm install cunofs-csi-chart <path-to-chart>
Available options:
| Yaml Value | Description | Default Value |
| --- | --- | --- |
| driverName | Optionally change the name of the deployed driver. Only useful if you want to deploy several instances of the driver. | cunofs.csi.com |
| cunofsCSIimage.pullPolicy | Specifies how the docker image is deployed onto the Node and Controller. Only useful to change if self-hosting the docker image. | Always |
| cunofsCSIimage.name | Specifies the cunoFS CSI docker image. Only useful to change if self-hosting the docker image under a different name (Note: do not include the version here). | cunofs/cunofs_csi |
| cunofsCSIimage.version | Specifies the docker image's version. No need to change it unless you have a good reason to. | <equal to chart version> |
| cunofsLicense.license | The license used for activating cunoFS on the Driver. It needs to be a valid Professional or Enterprise license. | <empty> |
| credsToImport | Yaml array that you can populate with your s3/az/gs credential files. | <empty> |
| rbac.useRBAC | Enables out-of-the-box support for RBAC clusters (deploys the required ClusterRole/ClusterRoleBinding). | true |
| eks.iam_arn | On Amazon EKS, support for importing buckets via IAM roles. | <empty> |
PersistentVolume options#
| Yaml Value | Description |
| --- | --- |
| metadata.name | Can be any name as long as it's unique |
| spec.capacity.storage | This value is ignored, but is required to be set by the CSI specification |
| spec.csi.driver | It's required to set this to "cunofs.csi.com" |
| spec.csi.volumeHandle | Name of the volume, needs to be unique |
| spec.accessModes | We currently only support "ReadWriteOncePod" |
| spec.csi.volumeAttributes.root | The cloud URI that will be mounted to the target mount path. If not specified, you can access s3, az and gs through the target + '/s3', '/az' or '/gs' directories |
| spec.csi.volumeAttributes.posix | Set it to "true" to enforce strict posix mode for cunoFS |
| spec.csi.volumeAttributes.allow_root | Set it to "true" to allow only the root user to access the mount. Overrides allow_other |
| spec.csi.volumeAttributes.allow_other | Set it to "true" to allow all users to use the mount (recommended) |
| spec.csi.volumeAttributes.auto_restart | Set it to "true" to automatically restart the cunoFS mount if an error occurs |
| spec.csi.volumeAttributes.readonly | Set it to "true" to mount the volume as read only |
StorageClass options#
| Yaml Value | Description |
| --- | --- |
| metadata.name | Can be any name as long as it's unique |
| provisioner | Needs to be "cunofs.csi.com" |
| reclaimPolicy | "Retain" will not delete the generated PVs and their storage when the PVCs go out of scope; "Delete" will |
| parameters.cloud-type | Can be "s3", "az" or "gs" |
| parameters.bucket | The bucket used to create volumes |
| parameters.bucket-subdir | Optional. The subdirectory of the bucket where the PVCs will get generated. Can be nested subdirectories like "dir/other_dir/yet_another_dir" |
| parameters.posix, parameters.allow_root, parameters.allow_other, parameters.auto_restart, parameters.readonly | These options will be passed down to the generated PV and behave the same way as described in the PersistentVolume options |
RBAC Support#
By default, the cunoFS CSI Driver deploys a ServiceAccount, ClusterRole and ClusterRoleBinding to support Role-Based Access Control (RBAC) out of the box.
They are respectively deployed under the following names: <release name>-serviceaccount, <release name>-clusterrole and <release name>-clusterrolebinding.
This means that we support Amazon EKS out-of-the-box.
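To confirm that these objects exist after installation, you can query them by name; the names below assume the release is called cunofs-csi-chart, as in the install example:
kubectl get serviceaccount cunofs-csi-chart-serviceaccount
kubectl get clusterrole cunofs-csi-chart-clusterrole
kubectl get clusterrolebinding cunofs-csi-chart-clusterrolebinding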
You can choose not to deploy the ClusterRole and ClusterRoleBinding by setting the rbac.useRBAC property to false in the values.yaml file or through --set:
helm install cunofs-csi-chart oci://registry-1.docker.io/cunofs/cunofs-csi-chart \
--set cunofsLicense.license="<license-text>" \
--set credsToImport="{<credentials-1>,<credentials-2>, ... ,<credentials-N>}" \
--set rbac.useRBAC=false
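Equivalently, when installing from a local chart with a values.yaml file (see the Helm chart section above), the same option can be set there; a minimal sketch:
# values.yaml file
rbac:
  useRBAC: false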
EKS IAM roles#
The cunoFS CSI Driver supports importing buckets through IAM roles on Amazon EKS.
Simply define the IAM role's ARN by setting the eks.iam_arn property in the values.yaml file or by passing it to helm through --set while deploying it:
helm install cunofs-csi-chart oci://registry-1.docker.io/cunofs/cunofs-csi-chart \
--set cunofsLicense.license="<license-text>" \
--set eks.iam_arn="<your IAM role's ARN>"
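Or, as a values.yaml entry (the ARN shown is a placeholder for your role's ARN):
# values.yaml file
eks:
  iam_arn: "arn:aws:iam::<account-id>:role/<role-name>"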
You should be able to access any bucket that the IAM role has permission to use, without extra credentials.
Note
Make sure that the IAM role you provide has the correct permissions, including listing S3 buckets and accessing them.
Technical Details#
The cunoFS CSI Driver abides by the Kubernetes Container Storage Interface (CSI) standard.
It implements the Node, Controller and Identity plugins, and uses sidecar containers to simplify its deployment and maintenance.
The CSI Driver is shipped as a single binary, which can act as the Node or the Controller depending on the context (how it's deployed and which sidecar containers are connected).
The Helm chart deploys Docker containers that have this binary preinstalled.
The Node plugin refers to the ability to mount and organise existing PersistentVolumes on a Kubernetes node.
The Controller plugin implements the ability to create PersistentVolumes dynamically through a StorageClass.
The Node and the Controller need to handle logic at different levels (a command for inspecting the resulting workloads follows this list):
- The Node plugin needs to be deployed on every K8s Node, since it handles mounting logic that's specific to each machine on which the application containers run. Therefore, it is deployed via a K8s DaemonSet. Additionally, these sidecar containers are shipped with the Node:
  - Liveness Probe: this sidecar container ensures that the driver remains responsive, and replaces the driver container on crash.
  - Node Driver Registrar: this sidecar container registers the driver with the kubelet to simplify its discovery and communication.
- The Controller plugin needs to be unique across a Kubernetes cluster, since it handles the lifecycle of PersistentVolumes, which are K8s global objects. It is therefore managed through a K8s StatefulSet:
  - Liveness Probe: this sidecar container, like with the Node plugin, ensures that the driver remains responsive, and replaces the driver container on crash.
  - External Provisioner: this sidecar container helps the driver interact with the K8s API by listening to volume provisioning-related calls.
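To inspect the resulting workloads, the label selector from the install section should also list the Node DaemonSet and the Controller StatefulSet (assuming the chart applies the same app.kubernetes.io/name label to them):
kubectl get daemonset,statefulset -l app.kubernetes.io/name=cunofs-csi-driver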
During the deployment, the cunoFS CSI Driver deploys the cunoFS license and cloud credentials as Secrets.
The license Secret is imported by the Node and the Controller through an environment variable.
The credential Secret is mounted to the Node and the Controller through a Projected Volume and sequentially imported by cunoFS.
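To see the deployed Secrets (assuming they carry the same app.kubernetes.io/name label used elsewhere on this page; otherwise, list all Secrets in the release's namespace):
kubectl get secrets -l app.kubernetes.io/name=cunofs-csi-driver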
Limitations#
Not every existing K8s optional feature is currently implemented in this driver. Please contact support@cuno.io for specific feature inquiries.
- Due to the internals of K8s, the cunoFS CSI Driver makes use of cunoFS mount as a backend instead of regular cunoFS. This means that performance will be high, but not always as high as a regular cunoFS installation.
- Not every cunoFS option is currently available for use in the driver. Please refer to the configuration section for the available options.
- There currently isn't a way of easily using cunoFS Fusion with this driver.
- Currently, only the ReadWriteOncePod access mode is supported. You can always create PVs with the same root, although this can be a bit more verbose than reusing the same PVC or PV.
- The cunoFS CSI Driver currently doesn't support CSI Ephemeral Volumes, raw block volumes, volume snapshotting, volume expansion, volume cloning and volume topology options.