Upgrading from 2.2.2 to 2.2.3 error: Exit status 1, unable to recognize "management-statefile_path_redacted": Get https://localhost:443/k8s/clusters/c-f7wxh/api?timeout=32s: remote error: tls

I have a few clusters: two on AWS launched using RKE and one imported from DigitalOcean Managed Kubernetes (DOKS).

After the upgrade, all of the clusters show as Provisioning with this error:

 	Exit status 1, unable to recognize "management-statefile_path_redacted": Get https://localhost:443/k8s/clusters/c-f7wxh/api?timeout=32s: remote error: tls: internal error

(the same message is repeated several more times in the banner)

However, I am still able to access all the clusters, upgrade workloads, and do everything as normal.


Any updates on this?

Any update?
I’ve got into the same situation (the cluster is stuck in Provisioning with the same error: Get https://localhost:443/k8s/clusters/c-ntx4g/api?timeout=32s: remote error: tls: internal error unable to recognize “management-statefile_path_redacted”) after rolling back to v2.3.5 following a failed upgrade to v2.4.2, when my cluster became unable to add one more node (the node registered as normal, but no workload scheduled on it ever got past the “ContainerCreating” state!).
To try to restore the cluster state, I also tried an in-place upgrade from v2.3.5 to v2.3.6. I was able to deploy the new v2.3.6 node agent, rotate certificates, etc., but the red header on the cluster page remains filled with many identical copies of the error message above.
The cluster remains in the same “Provisioning” state even after upgrading Kubernetes from 1.17.2 to 1.17.4 via the edit-cluster page in the Rancher GUI. The nodes show the new Kubernetes version, but the red header is still there!