Authentication#

cunoFS can connect to any of the major cloud storage providers using their native storage credentials.

If you have an S3-compatible object storage solution from another provider, you will first need to configure S3 API access.

Note

If you are trying to access S3 through an EC2 instance configured with an IAM role, no further configuration is needed and cunoFS will automatically authenticate using the AWS-managed configuration. You can skip to How to use cunoFS.
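To confirm that an IAM role is attached to your instance, you can query the EC2 instance metadata service. A minimal sketch using IMDSv2 (which requires a session token) that prints the attached role name, if any:

TOKEN=$(curl -sX PUT "http://169.254.169.254/latest/api/token" -H "X-aws-ec2-metadata-token-ttl-seconds: 300")
curl -s -H "X-aws-ec2-metadata-token: $TOKEN" "http://169.254.169.254/latest/meta-data/iam/security-credentials/"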

Getting your credentials#

Warning

Any credentials you use need sufficient permissions for cunoFS to discover and manage your data, including permission to list buckets. If granting this is not possible or desired, you must instead use the cuno creds pair options, for which instructions can be found here.
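As a rough guide, an IAM policy with enough access for bucket discovery might look like the following sketch, applied here with the AWS CLI (the user name, policy name, and bucket1 are placeholders; tighten the actions and resources to suit your security requirements):

cat > cunofs-policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    { "Effect": "Allow", "Action": "s3:ListAllMyBuckets", "Resource": "*" },
    { "Effect": "Allow", "Action": "s3:ListBucket", "Resource": "arn:aws:s3:::bucket1" },
    { "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:PutObject", "s3:DeleteObject"],
      "Resource": "arn:aws:s3:::bucket1/*" }
  ]
}
EOF
aws iam put-user-policy --user-name cunofs-user --policy-name cunofs-s3-access --policy-document file://cunofs-policy.json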

You will need the Access Key ID and Secret Access Key for an AWS IAM user with permission to access the S3 buckets you want to use. You should have saved these credentials when you first created the IAM user.
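If you have previously configured the AWS CLI with these keys, you can read them back from your local configuration; a minimal sketch (assuming the default profile):

aws configure get aws_access_key_id
aws configure get aws_secret_access_key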

Alternatively, create a new IAM user with “programmatic access” (access using keys), by following the AWS User Guide: Creating an IAM user in your AWS account.
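If you prefer the AWS CLI to the console for this step, a minimal sketch (cunofs-user is a placeholder, and the AmazonS3FullAccess managed policy is broad; consider a tighter policy such as the sketch above):

aws iam create-user --user-name cunofs-user
aws iam attach-user-policy --user-name cunofs-user --policy-arn arn:aws:iam::aws:policy/AmazonS3FullAccess
aws iam create-access-key --user-name cunofs-user   # note the AccessKeyId and SecretAccessKey in the output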

For further options and alternatives, consult our full guide on accessing S3 object storage.

Saving credentials as a file#

The file needs to be of the form:

aws_access_key_id = xxxxxxxxxxxxxxxxxx
aws_secret_access_key = xxxxxxxxxxxxxxxxxx

You can use any text editor to create the file; just remember to change the permissions on the file when you’re done to prevent other users from reading it:

chmod 0600 "<path to your credentials file>"

Alternatively, you can create the file and set its permissions in one step.
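A minimal bash sketch (substitute your own keys and path; the subshell confines the umask change, so the file is created with 0600 permissions):

( umask 077 && cat > credentials.txt <<EOF
aws_access_key_id = xxxxxxxxxxxxxxxxxx
aws_secret_access_key = xxxxxxxxxxxxxxxxxx
EOF
)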

Importing the credentials into cunoFS#

Note

The default location for imported credentials is the directory $XDG_CONFIG_HOME/cuno/creds (if unset, $XDG_CONFIG_HOME defaults to ~/.config). To use an alternative location, please set the CUNO_CREDENTIALS environment variable to point to this path. For example: export CUNO_CREDENTIALS=/home/user/my-cloud-credentials.

Note that you should not place your credentials directly in these locations, because the cuno creds import command also creates corresponding bucket entries and adds appropriate configuration settings (region, URL path style, etc.).

Assuming you have saved your credentials in a file named credentials.txt, run the following command to add them to the local set of cunoFS-managed credentials:

cuno creds import credentials.txt

This command will attempt to discover all the buckets that these credentials have access to, as well as the settings, limitations, and compatibility of these buckets. This may take a while if you have many buckets associated with the credentials you are importing.

Note

If you are using an S3-compatible service and are having problems, you can run a compatibility check:

cuno creds detectfeatures s3://bucket-to-test credentials.txt

This command will test S3 compatibility, settings, and limitations, and then reconfigure the credentials accordingly. You must specify a bucket that it can write temporary files to for testing purposes.

Warning

Running feature detection will use up to a few gigabytes of bandwidth and may take a few minutes to complete depending on the machine’s connection speed and the S3-compatible storage provider.

Testing that your credentials work#

You can immediately test that your credentials work using a private bucket that you are happy to use for these purposes (which we will assume is called bucket1).

Note

Optional: after each command, you can confirm that the changes are reflected in your cloud or object storage provider’s standard GUI interface.

First, ensure that cunoFS is enabled by calling cuno. If you are using a bash or zsh terminal, this will prefix your prompt with (cuno). Otherwise, run cuno again to see if cunoFS has been successfully enabled (if it has, you will see the output INFO: CUNO already loaded).
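For example, in bash (the prompt text shown is illustrative):

$ cuno
(cuno) $ cuno
INFO: CUNO already loaded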

Try listing your paired buckets:

ls s3://

Try listing objects:

ls s3://bucket1/

Try writing an object:

echo "hello world" > s3://bucket1/helloworld.txt

Try reading that file back:

cat s3://bucket1/helloworld.txt

Try deleting that file:

rm s3://bucket1/helloworld.txt
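To run the whole check in one go, here is a minimal bash sketch (assuming it is executed from a cuno-enabled shell, and that BUCKET names a private bucket you are happy to test against):

#!/usr/bin/env bash
set -euo pipefail
BUCKET="bucket1"                                      # replace with your private test bucket

ls "s3://"                                            # list paired buckets
ls "s3://$BUCKET/"                                    # list objects in the bucket
echo "hello world" > "s3://$BUCKET/helloworld.txt"    # write a test object
cat "s3://$BUCKET/helloworld.txt"                     # read it back
rm "s3://$BUCKET/helloworld.txt"                      # clean up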