Restore Rancher on k3s with etcd

I’m struggling with a test restore of a Rancher installation.
I set up a 3-node cluster (VMs) with k3s. The datastore-endpoint points to an etcd running on the same nodes.
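The server setup on each node was roughly the following (the IPs, certificate paths and token are placeholders from my lab, not the exact values):

# run on each of the three VMs; etcd itself is already running on them
curl -sfL https://get.k3s.io | sh -s - server \
  --datastore-endpoint="https://10.0.0.4:2379,https://10.0.0.5:2379,https://10.0.0.6:2379" \
  --datastore-cafile=/etc/etcd/pki/ca.crt \
  --datastore-certfile=/etc/etcd/pki/client.crt \
  --datastore-keyfile=/etc/etcd/pki/client.key \
  --token=<shared-token>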
On this k3s cluster I installed Rancher.
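Rancher went in through the usual Helm chart, roughly like this (the hostname is a placeholder, and I’m leaving out the cert-manager prerequisite steps):

# add the Rancher chart repo and install into cattle-system
helm repo add rancher-stable https://releases.rancher.com/server-charts/stable
k3s kubectl create namespace cattle-system
helm install rancher rancher-stable/rancher \
  --namespace cattle-system \
  --set hostname=rancher.example.com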
Now I’m testing a DR scenario: after taking an etcd snapshot, I destroy the etcd database and then restore it from that snapshot.
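The snapshot and restore are roughly these steps (endpoints, certificate paths and the data directory are placeholders from my lab; the restore runs on every node with that node’s own name and peer URL):

# take a snapshot from one etcd member
ETCDCTL_API=3 etcdctl \
  --endpoints=https://10.0.0.4:2379 \
  --cacert=/etc/etcd/pki/ca.crt \
  --cert=/etc/etcd/pki/client.crt \
  --key=/etc/etcd/pki/client.key \
  snapshot save /backup/etcd-snapshot.db

# on each node: stop etcd, wipe the old data dir, then restore
ETCDCTL_API=3 etcdctl snapshot restore /backup/etcd-snapshot.db \
  --name rnc-k3s-04 \
  --initial-cluster rnc-k3s-04=https://10.0.0.4:2380,rnc-k3s-05=https://10.0.0.5:2380,rnc-k3s-06=https://10.0.0.6:2380 \
  --initial-advertise-peer-urls https://10.0.0.4:2380 \
  --data-dir /var/lib/etcd

After the restore, etcd and k3s are working again, but the Rancher installation is missing: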

k3s kubectl get node
NAME         STATUS   ROLES    AGE     VERSION
rnc-k3s-04   Ready    master   5m32s   v1.18.6+k3s1
rnc-k3s-05   Ready    master   7m25s   v1.18.6+k3s1
rnc-k3s-06   Ready    master   6m20s   v1.18.6+k3s1

k3s kubectl get pods -A
NAMESPACE     NAME                                     READY   STATUS      RESTARTS   AGE
kube-system   coredns-8655855d6-ddm42                  1/1     Running     1          7m22s
kube-system   helm-install-traefik-5wnjb               0/1     Completed   1          7m22s
kube-system   local-path-provisioner-6d59f47c7-chxv9   1/1     Running     1          7m22s
kube-system   metrics-server-7566d596c8-ttvz7          1/1     Running     1          7m22s
kube-system   svclb-traefik-f6d6r                      2/2     Running     2          5m27s
kube-system   svclb-traefik-rvhzd                      2/2     Running     2          6m40s
kube-system   svclb-traefik-tspkq                      2/2     Running     2          5m28s
kube-system   traefik-758cd5fc85-j5twr                 1/1     Running     1          6m40s

What am I missing here?