Rancher reimport - Token has been invalidated

Hi,

I am using Rancher 2.x.
I am having trouble restoring the connection between Rancher and a K3s cluster that was previously imported.
The cluster was actually provisioned via the Terraform provider, but the effect is the same.

Once Rancher is dissociated from the K3s cluster, I am not able to reapply the import manifest in order to restore the connection.
When I try, I see the following kind of log messages from k3s:

master01 k3s[45274]: E0416 11:15:06.936249   45274 authentication.go:65] Unable to authenticate the request due to an error: [invalid bearer token, Token has been invalidated]
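
In case it is relevant, this is roughly how I have been checking whether the cattle service account in cattle-system still has a token secret behind it - I am not certain this is the token the apiserver is actually rejecting:

kubectl -n cattle-system get serviceaccount cattle -o yaml
kubectl -n cattle-system get secrets | grep cattle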

From the Rancher UI:

Cluster health check failed: Failed to communicate with API server: Unauthorized

‘Unauthorized’ is the part that confuses me: my expectation is that reapplying the import manifest from the cluster registration command found at /v3/clusterRegistrationTokens/<id>:system should behave no differently than the original import did.
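
For completeness, this is roughly how I retrieve that registration manifest via the API - RANCHER_URL, API_TOKEN (a Rancher API key), and CLUSTER_ID are placeholders for my real values, and the response contains the manifestUrl that I then kubectl apply:

curl -sk -u "$API_TOKEN" "https://$RANCHER_URL/v3/clusterRegistrationTokens?clusterId=$CLUSTER_ID"
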
Disassociation can happen in multiple ways.

  • One example is simply deleting or removing resources from the cattle-system namespace by mistake.
  • The second is the K3s upgrade process, where in some situations all state is lost on the cluster - a problem in the early K3s versions we are trying to upgrade away from.

I would expect that applying the import command at any point in time would, in effect, return things to their previous connected state, unless there are even more objects that need to be removed before attempting to restore the association.
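
For reference, my current guess at a fuller cleanup before reimporting, based purely on the objects the import manifest creates (see the apply output below) - I have not confirmed that this is sufficient or even correct:

kubectl delete namespace cattle-system
kubectl delete clusterrole cattle-admin proxy-clusterrole-kubeapiserver
kubectl delete clusterrolebinding cattle-admin-binding proxy-role-binding-kubernetes-master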

kubectl apply -f https://rancher.<uri>/v3/import/cn6p5sq92c227zkf8j5vwzgbp5rxs82x2zjf9t79ln8vkjz9lzk4hm.yaml
clusterrole.rbac.authorization.k8s.io/proxy-clusterrole-kubeapiserver created
clusterrolebinding.rbac.authorization.k8s.io/proxy-role-binding-kubernetes-master created
namespace/cattle-system created
serviceaccount/cattle created
clusterrolebinding.rbac.authorization.k8s.io/cattle-admin-binding created
secret/cattle-credentials-f267f91 created
clusterrole.rbac.authorization.k8s.io/cattle-admin created
deployment.apps/cattle-cluster-agent created
daemonset.apps/cattle-node-agent created
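
To check that the agents actually pick up the freshly created credentials secret rather than an old one, I have been looking at which secret the deployment references (assuming the agent still mounts the cattle-credentials secret as a volume):

kubectl -n cattle-system get deployment cattle-cluster-agent -o yaml | grep -B 1 -A 1 cattle-credentials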

The agents are running rancher/rancher-agent:v2.4.15.

kubectl get pods -n cattle-system
NAME                                   READY   STATUS    RESTARTS   AGE
cattle-cluster-agent-8d4c86797-l86xp   1/1     Running   0          17m
cattle-node-agent-bdgwz                1/1     Running   0          17m
cattle-node-agent-kbwwp                1/1     Running   0          17m
cattle-node-agent-sjjdw                1/1     Running   0          17m
cattle-node-agent-sthp9                1/1     Running   0          17m

No significant errors that I can tell from the cluster agent, apart from the following.

kubectl logs cattle-cluster-agent-8d4c86797-l86xp -n cattle-system | grep rror
time="2021-04-16T11:14:51Z" level=error msg="Failed to read API for groups map[metrics.k8s.io/v1beta1:an error on the server (\"Internal Server Error: \\\"/apis/metrics.k8s.io/v1beta1?timeout=32s\\\": Unauthorized\") has prevented the request from succeeding]"
time="2021-04-16T11:15:13Z" level=error msg="Failed to read API for groups map[metrics.k8s.io/v1beta1:an error on the server (\"Internal Server Error: \\\"/apis/metrics.k8s.io/v1beta1?timeout=32s\\\": Unauthorized\") has prevented the request from succeeding]"

If I import this as a brand-new cluster in Rancher, everything works fine.
Trying to reimport an existing cluster once it is broken does not seem to work.
I assume the cattle agent is programmatically creating some extra tokens somewhere that I have missed.
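
This is how I have been looking for any leftover cattle-related service accounts or token secrets that might still hold a stale token - nothing obvious has turned up so far:

kubectl get serviceaccounts --all-namespaces | grep -i cattle
kubectl get secrets --all-namespaces | grep -i cattle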

Any insights would be welcome.
Much appreciated!