Longhorn S3 credentials


Hi all

Any guidance on how to configure and pass AWS credentials to Longhorn for backups to S3?



Hi @spatialy

We just added S3 support in our development branch. If you want to try it, please use this commit:

Note that there is no upgrade path to this version (i.e. don’t use existing data with it).

You can use:

kubectl create -Rf deploy

to install the dev version.

In order to use S3:

  1. Create a Kubernetes secret in your Longhorn namespace (longhorn-system by default), named e.g. s3-secret. Put AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY in it as data.
  2. In the Longhorn UI, specify the name of the secret (s3-secret) in the Longhorn setting: Backup Target Credential Secret.
  3. Specify the Backup Target, e.g. s3://backupbucket@us-east-1/xxx/longhorn/backupstore
  4. Check the Backup tab in the Longhorn UI; it shouldn’t report an error. An empty list is expected.
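For step 1 above, the secret can be created directly from the command line. A minimal sketch, assuming the access key and secret key values below are placeholders you replace with your own credentials:

```shell
# Create the secret Longhorn reads the S3 credentials from.
# The key ID and secret key here are placeholder values.
kubectl create secret generic s3-secret \
  --namespace longhorn-system \
  --from-literal=AWS_ACCESS_KEY_ID=AKIAEXAMPLEKEY \
  --from-literal=AWS_SECRET_ACCESS_KEY=exampleSecretKey
```

You can verify the keys landed in the secret with `kubectl -n longhorn-system describe secret s3-secret`.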

You can refer to https://github.com/rancher/longhorn-tests/blob/master/manager/integration/deploy/test.yaml as an example. We’re using Minio in our testing.
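If you are pointing at a Minio (or another S3-compatible) endpoint rather than AWS itself, the linked test.yaml also sets an AWS_ENDPOINTS key in the same secret. A sketch, assuming the endpoint URL and credentials below are placeholders:

```shell
# Hypothetical Minio-style secret; the endpoint URL and credentials
# are illustrative placeholders, not real values.
kubectl create secret generic minio-secret \
  --namespace longhorn-system \
  --from-literal=AWS_ACCESS_KEY_ID=minioadmin \
  --from-literal=AWS_SECRET_ACCESS_KEY=minioadmin \
  --from-literal=AWS_ENDPOINTS=http://minio-service.default:9000
```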


Hi @yasker, thanks for the help … let me use the dev branch in a dev environment and I’ll let you know how it works.



Hi @yasker,

I am trying this on a test server running Rancher 2.0.3.

Volume creation from the Longhorn UI, snapshots, and S3 backups all work fine.

But when I try to install mongo-replicaset from the Rancher catalog library, the pods fail to acquire the volumes.

The error in the Longhorn manager is:

Error: "unable to create volume pvc-50bc9cd6-788d-11e8-ad6d-f23c91fd079e: &{OwnerID: Size:4294967296 Frontend: FromBackup: NumberOfReplicas:3 StaleReplicaTimeout:30 NodeID: EngineImage: RecurringJobs:[]}: invalid volume frontend specified: "

Many thanks!



It’s a bug in the development branch, and we’ve just fixed it. If you’re on the latest master, can you try yasker/longhorn-manager:dd69b3c and see if it solves your problem? Just run the commands below in the longhorn-manager directory.

sed -i "s/rancher\/longhorn-manager:885b53a/yasker\/longhorn-manager:dd69b3c/g" deploy/02-components/01-manager.yaml
kubectl delete -f deploy/02-components/01-manager.yaml
kubectl create -f deploy/02-components/01-manager.yaml


@yasker Thanks! This is brilliant.
Works great, and that was fast :slight_smile:

So far I have tried a MongoDB replica set with 3 members and PVCs, as well as Prometheus/Grafana.

All PVCs went through fine except the Grafana one, but creating it manually via kubectl/YAML and killing the pod solved the issue.

As I will be testing Longhorn extensively, where is the best place to give you feedback / file issues?

Many thanks!




Glad it works!

In the future, you can file issues at: https://github.com/rancher/longhorn/issues