Trying to remove a broken Rancher cluster node (RKE)

One of my Rancher clusters (the one running Rancher itself) shows a broken node. For an unknown reason, there are no containers on this node anymore.

I tried to remove the node using the information in https://rancher.com/docs/rke/latest/en/managing-clusters/: I removed the broken node from cluster.yml and ran rke up. rke detects that it should remove the broken node (now missing from cluster.yml), but then fails with a certificate validation error:
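For reference, the nodes section of my cluster.yml now looks roughly like this. The IP addresses match the log below; the user and role values are placeholders, not my exact config:

```yaml
# 3-node-rancher-test.yml -- sketch of the nodes section after removing the
# broken node (user/role are placeholders, not my exact settings)
nodes:
  - address: 192.168.253.12
    user: rancher
    role: [controlplane, worker, etcd]
  - address: 192.168.253.14
    user: rancher
    role: [controlplane, worker, etcd]
  # - address: 192.168.253.13   # broken node, removed before running rke up
  #   user: rancher
  #   role: [controlplane, worker, etcd]
```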

$ ./rke_linux-amd64-1.1.2 up --config RANCHER2_TEST/3-node-rancher-test.yml
INFO[0000] Running RKE version: v1.1.2
INFO[0000] Initiating Kubernetes cluster
INFO[0000] [certificates] GenerateServingCertificate is disabled, checking if there are unused kubelet certificates
INFO[0000] [certificates] Generating Kubernetes API server certificates
INFO[0000] [certificates] Generating admin certificates and kubeconfig
INFO[0000] [certificates] Generating kube-etcd-192-168-253-12 certificate and key
INFO[0000] [certificates] Generating kube-etcd-192-168-253-14 certificate and key
INFO[0000] [certificates] Deleting unused certificate: kube-etcd-192-168-253-13
INFO[0000] Successfully Deployed state file at [RANCHER2_TEST/3-node-rancher-test.rkestate]
INFO[0000] Building Kubernetes cluster
INFO[0000] [dialer] Setup tunnel for host [192.168.253.14]
INFO[0000] [dialer] Setup tunnel for host [192.168.253.12]
INFO[0000] [network] No hosts added existing cluster, skipping port check
INFO[0000] [certificates] kube-apiserver certificate changed, force deploying certs
INFO[0000] [certificates] Deploying kubernetes certificates to Cluster nodes
INFO[0000] Checking if container [cert-deployer] is running on host [192.168.253.12], try #1
INFO[0000] Checking if container [cert-deployer] is running on host [192.168.253.14], try #1
INFO[0000] Image [rancher/rke-tools:v0.1.50] exists on host [192.168.253.12]
INFO[0000] Image [rancher/rke-tools:v0.1.50] exists on host [192.168.253.14]
INFO[0001] Starting container [cert-deployer] on host [192.168.253.12], try #1
INFO[0001] Starting container [cert-deployer] on host [192.168.253.14], try #1
INFO[0001] Checking if container [cert-deployer] is running on host [192.168.253.14], try #1
INFO[0001] Checking if container [cert-deployer] is running on host [192.168.253.12], try #1
INFO[0006] Checking if container [cert-deployer] is running on host [192.168.253.14], try #1
INFO[0006] Removing container [cert-deployer] on host [192.168.253.14], try #1
INFO[0006] Checking if container [cert-deployer] is running on host [192.168.253.12], try #1
INFO[0006] Removing container [cert-deployer] on host [192.168.253.12], try #1
INFO[0006] [reconcile] Rebuilding and updating local kube config
INFO[0006] Successfully Deployed local admin kubeconfig at [RANCHER2_TEST/kube_config_3-node-rancher-test.yml]
INFO[0006] Successfully Deployed local admin kubeconfig at [RANCHER2_TEST/kube_config_3-node-rancher-test.yml]
INFO[0006] [certificates] Successfully deployed kubernetes certificates to Cluster nodes
INFO[0006] [reconcile] Reconciling cluster state
INFO[0006] [reconcile] Check etcd hosts to be deleted
INFO[0006] [remove/etcd] Removing member [etcd-192.168.253.13] from etcd cluster
WARN[0008] [reconcile] Failed to delete etcd member [etcd-192.168.253.13] from etcd cluster
INFO[0008] [reconcile] Check etcd hosts to be added
INFO[0008] [hosts] host [192.168.253.13] has another role, skipping delete from kubernetes cluster
INFO[0008] [dialer] Setup tunnel for host [192.168.253.13]
INFO[0008] [worker] Tearing down Worker Plane..
INFO[0008] [worker] Host [192.168.253.13] is already a controlplane host, nothing to do.
INFO[0008] [worker] Successfully tore down Worker Plane..
INFO[0008] [hosts] Host [192.168.253.13] is already a controlplane or etcd host, skipping cleanup.
INFO[0008] [hosts] Cordoning host [192.168.253.13]
FATA[0033] Failed to delete controlplane node [192.168.253.13] from cluster: Get https://192.168.253.14:6443/api/v1/nodes: x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kube-ca")

This is a Rancher cluster set up with RKE, currently running Rancher 2.2.8; the working nodes run K8s 1.13.10.
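If I read the FATA error correctly, the API server's certificate no longer chains to the kube-ca that my local kubeconfig trusts. As a sanity check of my understanding (throwaway CA and certificate names below, nothing from the real cluster), the check that fails is essentially this x509 chain verification:

```shell
# Create a throwaway CA (stand-in for kube-ca) and a server cert signed by it
# (stand-in for the kube-apiserver cert). All names here are made up.
openssl req -x509 -newkey rsa:2048 -nodes -keyout ca.key -out ca.pem \
  -subj "/CN=kube-ca" -days 1
openssl req -newkey rsa:2048 -nodes -keyout srv.key -out srv.csr \
  -subj "/CN=kube-apiserver"
openssl x509 -req -in srv.csr -CA ca.pem -CAkey ca.key -CAcreateserial \
  -out srv.pem -days 1

# This is roughly what the client does before the FATA[0033] above; it only
# succeeds when the server cert was signed by the trusted CA.
openssl verify -CAfile ca.pem srv.pem   # prints: srv.pem: OK
# Verifying against a *different* CA file instead fails with
# "certificate signed by unknown authority"-style errors, like in my log.
```

So it looks like the certs deployed by rke up and the CA in my kubeconfig/rkestate no longer match.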

Any ideas?