How do you use a snapshot in Longhorn?


The manual says that I can go back to any snapshot I took, but there is no obvious button to use a snapshot. You can create a snapshot, but how do you actually use it?

  1. A snapshot saves the state of the volume at a point in time, and a snapshot is required before creating a backup.
  2. By clicking the snapshot icon, you will see the snapshot-related operations: Revert (to that point in the volume's history), Create Backup, and Delete.

There’s no Revert item.
Yeah. Probably because I haven’t made a backup; I haven’t set it up yet.

Set up a local testing backupstore

I can’t connect to the test backupstore by following the instructions. The secret part is confusing to me.

  1. The snapshot revert is available only when the volume is attached in maintenance mode. We will add a reminder in the UI for volumes that are attached but not in maintenance mode in the next release.

  2. For the test backupstore, you can use the following command to deploy a test minio service:

kubectl create -f https://raw.githubusercontent.com/longhorn/longhorn-tests/master/manager/integration/deploy/backupstores/minio-backupstore.yaml

Then set Backup Target Credential Secret to minio-secret and Backup Target to s3://backupbucket@us-east-1/backupstore on the Longhorn UI Setting page.
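For reference, the manifest above also creates the minio-secret that this setting points to; the pod environment later in this thread confirms it carries AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY. A minimal sketch of such a Secret, with placeholder base64 values, in case you want to recreate it by hand; the authoritative keys and values are in the linked YAML:

apiVersion: v1
kind: Secret
metadata:
  name: minio-secret
  namespace: default
type: Opaque
data:
  # base64("longhorn-test-access-key") -- placeholder credentials for the test minio
  AWS_ACCESS_KEY_ID: bG9uZ2hvcm4tdGVzdC1hY2Nlc3Mta2V5
  # base64("longhorn-test-secret-key") -- placeholder
  AWS_SECRET_ACCESS_KEY: bG9uZ2hvcm4tdGVzdC1zZWNyZXQta2V5
  # base64("http://minio-service.default:9000") -- S3-compatible endpoint; check the linked YAML for the exact key set
  AWS_ENDPOINTS: aHR0cDovL21pbmlvLXNlcnZpY2UuZGVmYXVsdDo5MDAw

To sanity-check the deployment afterwards (pod label and service name as defined in that manifest):

kubectl get pod -l app=longhorn-test-minio
kubectl get svc minio-service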

I couldn’t start it. The container won’t start.

Can you deploy other workloads in your cluster as usual? If yes, can you provide the output of the following commands for debugging?

kubectl describe pod longhorn-test-minio
kubectl logs longhorn-test-minio
> kubectl describe pod longhorn-test-minio
Name:         longhorn-test-minio
Namespace:    default
Priority:     0
Node:         node/173.212.219.253
Start Time:   Thu, 26 Dec 2019 17:59:46 +0000
Labels:       app=longhorn-test-minio
Annotations:  cni.projectcalico.org/podIP: 10.42.2.209/32
              kubernetes.io/limit-ranger: LimitRanger plugin set: cpu, memory request for container minio; cpu, memory limit for container minio
Status:       Running
IP:           10.42.2.209
IPs:
  IP:  10.42.2.209
Containers:
  minio:
    Container ID:  docker://59a0793f2d6cf2a4b3c794ac8860fa034686a412eb9b7bd764f67b2e59d95cd8
    Image:         minio/minio
    Image ID:      docker-pullable://minio/minio@sha256:76abc12c611e215926747e09beabbb1b4b5604b76339f8cb9ab3177be145c0a0
    Port:          9000/TCP
    Host Port:     0/TCP
    Command:
      sh
      -c
      mkdir -p /storage/backupbucket && exec /usr/bin/minio server /storage
    State:          Running
      Started:      Thu, 26 Dec 2019 19:46:42 +0000
    Last State:     Terminated
      Reason:       OOMKilled
      Exit Code:    137
      Started:      Thu, 26 Dec 2019 19:37:51 +0000
      Finished:     Thu, 26 Dec 2019 19:41:36 +0000
    Ready:          True
    Restart Count:  22
    Limits:
      cpu:     0
      memory:  4Mi
    Requests:
      cpu:     0
      memory:  4Mi
    Environment:
      MINIO_ACCESS_KEY:  <set to the key 'AWS_ACCESS_KEY_ID' in secret 'minio-secret'>      Optional: false
      MINIO_SECRET_KEY:  <set to the key 'AWS_SECRET_ACCESS_KEY' in secret 'minio-secret'>  Optional: false
    Mounts:
      /storage from minio-volume (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-4b99z (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             True 
  ContainersReady   True 
  PodScheduled      True 
Volumes:
  minio-volume:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:     
    SizeLimit:  <unset>
  default-token-4b99z:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-4b99z
    Optional:    false
QoS Class:       Burstable
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason   Age                     From           Message
  ----     ------   ----                    ----           -------
  Warning  BackOff  2m15s (x399 over 107m)  kubelet, node  Back-off restarting failed container

error listing backups: error listing backup volumes: Failed to execute: /var/lib/rancher/longhorn/engine-binaries/longhornio-longhorn-engine-v0.7.0/longhorn [backup ls --volume-only s3://backupbucket@us-east-1/backupstore], output AWS Error: RequestError send request failed Get http://minio-service.default:9000/backupbucket?delimiter=%!F(MISSING)&prefix=backupstore%!F(MISSING): dial tcp 10.43.69.86:9000: connect: connection refused , stderr,
time="2019-12-26T19:49:50Z" level=error msg="{\n\n}" pkg=s3
time="2019-12-26T19:49:50Z" level=error msg="Fail to list s3: AWS Error: RequestError send request failed Get http://minio-service.default:9000/backupbucket?delimiter=%!F(MISSING)&prefix=backupstore%!F(MISSING): dial tcp 10.43.69.86:9000: connect: connection refused\n" pkg=s3
time="2019-12-26T19:49:50Z" level=error msg="AWS Error: RequestError send request failed Get http://minio-service.default:9000/backupbucket?delimiter=%!F(MISSING)&prefix=backupstore%!F(MISSING): dial tcp 10.43.69.86:9000: connect: connection refused\n"
, error exit status 1

kubectl logs longhorn-test-minio
This command gives no output at all.

  1. The reason for this test minio pod crash is right there in the describe output (see the sketch below for one way to fix it):
Last State:     Terminated
      Reason:       OOMKilled
The LimitRanger in the default namespace (see the pod annotation above) capped the minio container at 4Mi of memory, which is far too small for minio, so the container is repeatedly OOM-killed and restarted.
  2. The backup page is not available while there is no valid backup target setting.
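One way to fix the OOM kill (a sketch, not the only option): re-create the test resources with explicit resource requests and limits on the minio container, so the namespace's 4Mi LimitRange default no longer applies. The memory values below are illustrative guesses, not figures from the Longhorn project:

curl -sLO https://raw.githubusercontent.com/longhorn/longhorn-tests/master/manager/integration/deploy/backupstores/minio-backupstore.yaml
# Edit minio-backupstore.yaml and add, under the minio container spec, something like:
#   resources:
#     requests:
#       memory: "128Mi"
#     limits:
#       memory: "512Mi"
kubectl delete -f minio-backupstore.yaml
kubectl create -f minio-backupstore.yaml

Once the pod stays Running without restarts, the connection refused errors against minio-service.default:9000 should stop and the backup target setting should start working.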