[ERROR] [controlPlane] Failed to upgrade Control Plane: [[host rancher-w1 not ready]]

Hi everybody, I have set up a Rancher server, and from it I am trying to create a new cluster.
I have 2 VMs, both of which have been set up to act as etcd, control plane, and worker nodes.
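For reference, both VMs were registered with the node command from the Rancher UI, with all three roles selected. It looked roughly like this (the token, checksum, and agent version are placeholders; the exact command comes from your own Rancher UI, and this assumes a custom cluster rather than a node-driver one):

    sudo docker run -d --privileged --restart=unless-stopped --net=host \
      -v /etc/kubernetes:/etc/kubernetes -v /var/run:/var/run \
      rancher/rancher-agent:<version> \
      --server https://<rancher-server-url> --token <token> --ca-checksum <checksum> \
      --etcd --controlplane --worker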

The provisioning log reports the following:

[INFO ] Initiating Kubernetes cluster
9:18:35 pm [INFO ] Successfully Deployed state file at [management-state/rke/rke-532279432/cluster.rkestate]
9:18:35 pm [INFO ] Building Kubernetes cluster
9:18:35 pm [INFO ] [dialer] Setup tunnel for host [192.168.1.202]
9:18:35 pm [INFO ] [dialer] Setup tunnel for host [192.168.1.201]
9:18:35 pm [INFO ] [network] Deploying port listener containers
9:18:37 pm [INFO ] [network] Successfully started [rke-cp-port-listener] container on host [192.168.1.202]
9:18:37 pm [INFO ] [network] Successfully started [rke-worker-port-listener] container on host [192.168.1.202]
9:18:37 pm [INFO ] [network] Port listener containers deployed successfully
9:18:37 pm [INFO ] [network] Running etcd <-> etcd port checks
9:18:38 pm [INFO ] [network] Successfully started [rke-port-checker] container on host [192.168.1.201]
9:18:38 pm [INFO ] [network] Successfully started [rke-port-checker] container on host [192.168.1.202]
9:18:38 pm [INFO ] [network] Running control plane -> etcd port checks
9:18:39 pm [INFO ] [network] Successfully started [rke-port-checker] container on host [192.168.1.202]
9:18:39 pm [INFO ] [network] Successfully started [rke-port-checker] container on host [192.168.1.201]
9:18:39 pm [INFO ] [network] Running control plane -> worker port checks
9:18:39 pm [INFO ] [network] Successfully started [rke-port-checker] container on host [192.168.1.201]
9:18:39 pm [INFO ] [network] Successfully started [rke-port-checker] container on host [192.168.1.202]
9:18:39 pm [INFO ] [network] Running workers -> control plane port checks
9:18:40 pm [INFO ] [network] Successfully started [rke-port-checker] container on host [192.168.1.202]
9:18:40 pm [INFO ] [network] Successfully started [rke-port-checker] container on host [192.168.1.201]
9:18:40 pm [INFO ] [network] Skipping kubeapi port check
9:18:40 pm [INFO ] [network] Removing port listener containers
9:18:40 pm [INFO ] [remove/rke-etcd-port-listener] Successfully removed container on host [192.168.1.201]
9:18:40 pm [INFO ] [remove/rke-etcd-port-listener] Successfully removed container on host [192.168.1.202]
9:18:40 pm [INFO ] [remove/rke-cp-port-listener] Successfully removed container on host [192.168.1.201]
9:18:40 pm [INFO ] [remove/rke-cp-port-listener] Successfully removed container on host [192.168.1.202]
9:18:41 pm [INFO ] [remove/rke-worker-port-listener] Successfully removed container on host [192.168.1.201]
9:18:41 pm [INFO ] [remove/rke-worker-port-listener] Successfully removed container on host [192.168.1.202]
9:18:41 pm [INFO ] [network] Port listener containers removed successfully
9:18:41 pm [INFO ] [certificates] Deploying kubernetes certificates to Cluster nodes
9:18:47 pm [INFO ] [reconcile] Rebuilding and updating local kube config
9:18:47 pm [INFO ] Successfully Deployed local admin kubeconfig at [management-state/rke/rke-532279432/kube_config_cluster.yml]
9:18:47 pm [INFO ] [reconcile] host [192.168.1.201] is a control plane node with reachable Kubernetes API endpoint in the cluster
9:18:47 pm [INFO ] [certificates] Successfully deployed kubernetes certificates to Cluster nodes
9:18:47 pm [INFO ] [file-deploy] Deploying file [/etc/kubernetes/kube-api-authn-webhook.yaml] to node [192.168.1.201]
9:18:47 pm [INFO ] Successfully started [file-deployer] container on host [192.168.1.201]
9:18:47 pm [INFO ] Waiting for [file-deployer] container to exit on host [192.168.1.201]
9:18:47 pm [INFO ] Waiting for [file-deployer] container to exit on host [192.168.1.201]
9:18:47 pm [INFO ] Container [file-deployer] is still running on host [192.168.1.201]: stderr: , stdout:
9:18:48 pm [INFO ] Waiting for [file-deployer] container to exit on host [192.168.1.201]
9:18:48 pm [INFO ] [remove/file-deployer] Successfully removed container on host [192.168.1.201]
9:18:48 pm [INFO ] [file-deploy] Deploying file [/etc/kubernetes/kube-api-authn-webhook.yaml] to node [192.168.1.202]
9:18:49 pm [INFO ] Successfully started [file-deployer] container on host [192.168.1.202]
9:18:49 pm [INFO ] Waiting for [file-deployer] container to exit on host [192.168.1.202]
9:18:49 pm [INFO ] Waiting for [file-deployer] container to exit on host [192.168.1.202]
9:18:49 pm [INFO ] Container [file-deployer] is still running on host [192.168.1.202]: stderr: , stdout:
9:18:50 pm [INFO ] Waiting for [file-deployer] container to exit on host [192.168.1.202]
9:18:50 pm [INFO ] [remove/file-deployer] Successfully removed container on host [192.168.1.202]
9:18:50 pm [INFO ] [/etc/kubernetes/kube-api-authn-webhook.yaml] Successfully deployed authentication webhook config Cluster nodes
9:18:50 pm [INFO ] [file-deploy] Deploying file [/etc/kubernetes/audit-policy.yaml] to node [192.168.1.201]
9:18:50 pm [INFO ] Successfully started [file-deployer] container on host [192.168.1.201]
9:18:50 pm [INFO ] Waiting for [file-deployer] container to exit on host [192.168.1.201]
9:18:50 pm [INFO ] Waiting for [file-deployer] container to exit on host [192.168.1.201]
9:18:50 pm [INFO ] Container [file-deployer] is still running on host [192.168.1.201]: stderr: , stdout:
9:18:51 pm [INFO ] Waiting for [file-deployer] container to exit on host [192.168.1.201]
9:18:51 pm [INFO ] [remove/file-deployer] Successfully removed container on host [192.168.1.201]
9:18:51 pm [INFO ] [file-deploy] Deploying file [/etc/kubernetes/audit-policy.yaml] to node [192.168.1.202]
9:18:52 pm [INFO ] Successfully started [file-deployer] container on host [192.168.1.202]
9:18:52 pm [INFO ] Waiting for [file-deployer] container to exit on host [192.168.1.202]
9:18:52 pm [INFO ] Waiting for [file-deployer] container to exit on host [192.168.1.202]
9:18:52 pm [INFO ] Container [file-deployer] is still running on host [192.168.1.202]: stderr: , stdout:
9:18:53 pm [INFO ] Waiting for [file-deployer] container to exit on host [192.168.1.202]
9:18:53 pm [INFO ] [remove/file-deployer] Successfully removed container on host [192.168.1.202]
9:18:53 pm [INFO ] [/etc/kubernetes/audit-policy.yaml] Successfully deployed audit policy file to Cluster control nodes
9:18:53 pm [INFO ] [reconcile] Reconciling cluster state
9:18:53 pm [INFO ] [reconcile] Check etcd hosts to be deleted
9:18:53 pm [INFO ] [reconcile] Check etcd hosts to be added
9:18:54 pm [INFO ] Successfully started [etcd-fix-perm] container on host [192.168.1.202]
9:18:54 pm [INFO ] Waiting for [etcd-fix-perm] container to exit on host [192.168.1.202]
9:18:54 pm [INFO ] Waiting for [etcd-fix-perm] container to exit on host [192.168.1.202]
9:18:54 pm [INFO ] Container [etcd-fix-perm] is still running on host [192.168.1.202]: stderr: , stdout:
9:18:55 pm [INFO ] Waiting for [etcd-fix-perm] container to exit on host [192.168.1.202]
9:18:55 pm [INFO ] [remove/etcd-fix-perm] Successfully removed container on host [192.168.1.202]
9:18:55 pm [INFO ] [etcd] Successfully started [rke-log-linker] container on host [192.168.1.202]
9:18:55 pm [INFO ] [remove/rke-log-linker] Successfully removed container on host [192.168.1.202]
9:19:05 pm [INFO ] [reconcile] Rebuilding and updating local kube config
9:19:05 pm [INFO ] Successfully Deployed local admin kubeconfig at [management-state/rke/rke-532279432/kube_config_cluster.yml]
9:19:05 pm [INFO ] [reconcile] host [192.168.1.201] is a control plane node with reachable Kubernetes API endpoint in the cluster
9:19:05 pm [INFO ] [reconcile] Reconciled cluster state successfully
9:19:05 pm [INFO ] Pre-pulling kubernetes images
9:19:05 pm [INFO ] Kubernetes images pulled successfully
9:19:05 pm [INFO ] [etcd] Building up etcd plane…
9:19:06 pm [INFO ] Successfully started [etcd-fix-perm] container on host [192.168.1.201]
9:19:06 pm [INFO ] Waiting for [etcd-fix-perm] container to exit on host [192.168.1.201]
9:19:06 pm [INFO ] Waiting for [etcd-fix-perm] container to exit on host [192.168.1.201]
9:19:06 pm [INFO ] Container [etcd-fix-perm] is still running on host [192.168.1.201]: stderr: , stdout:
9:19:07 pm [INFO ] Waiting for [etcd-fix-perm] container to exit on host [192.168.1.201]
9:19:07 pm [INFO ] [remove/etcd-fix-perm] Successfully removed container on host [192.168.1.201]
9:19:08 pm [INFO ] [etcd] Successfully started [rke-log-linker] container on host [192.168.1.201]
9:19:08 pm [INFO ] [remove/rke-log-linker] Successfully removed container on host [192.168.1.201]
9:19:09 pm [INFO ] Successfully started [etcd-fix-perm] container on host [192.168.1.202]
9:19:09 pm [INFO ] Waiting for [etcd-fix-perm] container to exit on host [192.168.1.202]
9:19:09 pm [INFO ] Waiting for [etcd-fix-perm] container to exit on host [192.168.1.202]
9:19:09 pm [INFO ] Container [etcd-fix-perm] is still running on host [192.168.1.202]: stderr: , stdout:
9:19:10 pm [INFO ] Waiting for [etcd-fix-perm] container to exit on host [192.168.1.202]
9:19:10 pm [INFO ] [remove/etcd-fix-perm] Successfully removed container on host [192.168.1.202]
9:19:10 pm [INFO ] [etcd] Successfully started [rke-log-linker] container on host [192.168.1.202]
9:19:10 pm [INFO ] [remove/rke-log-linker] Successfully removed container on host [192.168.1.202]
9:19:10 pm [INFO ] [etcd] Successfully started etcd plane… Checking etcd cluster health
9:19:35 pm [INFO ] [controlplane] Building up Controller Plane…
9:19:35 pm [INFO ] [sidekick] Sidekick container already created on host [192.168.1.201]
9:19:35 pm [INFO ] [healthcheck] Start Healthcheck on service [kube-apiserver] on host [192.168.1.201]
9:19:35 pm [INFO ] [healthcheck] service [kube-apiserver] on host [192.168.1.201] is healthy
9:19:36 pm [INFO ] [controlplane] Successfully started [rke-log-linker] container on host [192.168.1.201]
9:19:36 pm [INFO ] [remove/rke-log-linker] Successfully removed container on host [192.168.1.201]
9:19:36 pm [INFO ] [healthcheck] Start Healthcheck on service [kube-controller-manager] on host [192.168.1.201]
9:19:36 pm [INFO ] [healthcheck] service [kube-controller-manager] on host [192.168.1.201] is healthy
9:19:37 pm [INFO ] [controlplane] Successfully started [rke-log-linker] container on host [192.168.1.201]
9:19:37 pm [INFO ] [remove/rke-log-linker] Successfully removed container on host [192.168.1.201]
9:19:37 pm [INFO ] [healthcheck] Start Healthcheck on service [kube-scheduler] on host [192.168.1.201]
9:19:37 pm [INFO ] [healthcheck] service [kube-scheduler] on host [192.168.1.201] is healthy
9:19:38 pm [INFO ] [controlplane] Successfully started [rke-log-linker] container on host [192.168.1.201]
9:19:38 pm [INFO ] [remove/rke-log-linker] Successfully removed container on host [192.168.1.201]
9:19:38 pm [INFO ] [controlplane] Successfully started Controller Plane…
9:19:38 pm [INFO ] [worker] Building up Worker Plane…
9:19:38 pm [INFO ] [sidekick] Sidekick container already created on host [192.168.1.201]
9:19:38 pm [INFO ] [healthcheck] Start Healthcheck on service [kubelet] on host [192.168.1.201]
9:19:38 pm [INFO ] [healthcheck] service [kubelet] on host [192.168.1.201] is healthy
9:19:38 pm [INFO ] [worker] Successfully started [rke-log-linker] container on host [192.168.1.201]
9:19:39 pm [INFO ] [remove/rke-log-linker] Successfully removed container on host [192.168.1.201]
9:19:39 pm [INFO ] [healthcheck] Start Healthcheck on service [kube-proxy] on host [192.168.1.201]
9:19:39 pm [INFO ] [healthcheck] service [kube-proxy] on host [192.168.1.201] is healthy
9:19:39 pm [INFO ] [worker] Successfully started [rke-log-linker] container on host [192.168.1.201]
9:19:39 pm [INFO ] [remove/rke-log-linker] Successfully removed container on host [192.168.1.201]
9:19:39 pm [INFO ] [worker] Successfully started Worker Plane…
9:20:04 pm [INFO ] [controlplane] Processing controlplane hosts for upgrade 1 at a time
9:20:04 pm [INFO ] [controlplane] Adding controlplane nodes rancher-w2 to the cluster
9:20:04 pm [INFO ] Processing controlplane host rancher-w1
9:20:29 pm [ERROR] [controlPlane] Failed to upgrade Control Plane: [[host rancher-w1 not ready]]

The cluster conditions are as below:

[cluster conditions screenshot]

Can somebody help?
Thanks

Solved! It was a disk space issue on /var.
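In case it helps anyone else who lands here: RKE keeps its container and kubelet state under /var/lib/docker and /var/lib/kubelet, so when /var fills up the kubelet can report DiskPressure and the node shows up as not ready. A quick way to check (node name taken from the log above):

    # On each node: free space on the /var filesystem
    df -h /var

    # From a machine with the cluster kubeconfig: look for DiskPressure under Conditions
    kubectl describe node rancher-w1 | grep -A 8 'Conditions:'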

I have the same problem, but I don't have a disk space issue on my /var.
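If it is not disk space, the next thing I would check is the kubelet itself. On an RKE-built node it runs as a plain Docker container named kubelet, so on the not-ready node:

    # Any RKE containers stuck restarting?
    docker ps -a

    # Last kubelet log lines (the kubelet logs to stderr, hence the redirect)
    docker logs --tail 100 kubelet 2>&1 | grep -iE 'error|fail'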

I am having the same problem as well. Was it ever solved?