I have a K8s cluster that I created with RKE and added to Rancher. I have containers that will bind to ports 80 and 443 on the host, so I don't want to use an ingress controller. I added this config to remove the ingress:
```yaml
ingress:
  provider: none
```
When I run `rke up`, it reaches the point where it tries to remove the existing ingress controller, but then fails:
```
INFO [addons] Setting up Metrics Server
INFO [addons] Saving addon ConfigMap to Kubernetes
INFO [addons] Successfully Saved addon to Kubernetes ConfigMap: rke-metrics-addon
INFO [addons] Executing deploy job..
INFO [addons] KubeDNS deployed successfully..
INFO [ingress] removing installed ingress controller
WARN Failed to deploy addon execute job [rke-ingress-controller]: Failed to get job complete status: <nil>
INFO [addons] Setting up user addons
INFO [addons] no user addons defined
INFO Finished building Kubernetes cluster successfully
```
I can see there is a pod that is supposed to delete the ingress, and it appears to have done its job, since I no longer see the ingress controller pods:
```
$ kubectl get pods --all-namespaces | grep ingress
kube-system   rke-ingress-controller-delete-job-ctbpg   1/1   Running   0   9m21s
$ kubectl logs -n kube-system rke-ingress-controller-delete-job-ctbpg
namespace "ingress-nginx" deleted
```
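Since RKE's warning says it could not get the job's complete status, it may be worth inspecting the delete Job itself rather than its pod. A sketch of what I would check (the Job name `rke-ingress-controller-delete-job` is inferred from the pod name above and may differ):

```shell
# List ingress-related Jobs in kube-system to confirm the actual Job name
kubectl get jobs -n kube-system

# Show the Job's conditions, i.e. why it never reached Complete
# (Job name below is an assumption inferred from the pod name)
kubectl describe job -n kube-system rke-ingress-controller-delete-job

# If the Job is stuck, deleting it lets the next `rke up` recreate it cleanly
kubectl delete job -n kube-system rke-ingress-controller-delete-job
```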
Running `rke up` again results in the same error.
Meanwhile, Rancher is no longer able to communicate with the cluster:
```
This cluster is currently Provisioning; areas that interact directly with it will not be available until the API is ready.
Exit status 1, Error from server (ServiceUnavailable): the server is currently unable to handle the request
```
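That `ServiceUnavailable` error typically comes from an aggregated APIService that no longer has a healthy backend, which breaks API discovery for clients like Rancher. A sketch of how I would narrow it down (the `v1beta1.metrics.k8s.io` name and the `k8s-app=metrics-server` label are assumptions, based on Metrics Server being deployed in the log above):

```shell
# List aggregated APIServices; any entry with AVAILABLE=False can make
# clients report "the server is currently unable to handle the request"
kubectl get apiservices

# If v1beta1.metrics.k8s.io shows False (ServiceUnavailable), check the
# pods that are supposed to back it (label is an assumption)
kubectl -n kube-system get pods -l k8s-app=metrics-server
```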
How do I recover so that Rancher can communicate with the cluster again?