The EKS cluster and the managed node groups become Active after a few minutes.
I log in to the Rancher UI to register this EKS cluster. The cluster name appears in the drop-down menu as well, indicating there is no issue with the AWS credentials.
However, after importing the cluster into Rancher I see the following error message: "Cluster health check failed: cluster agent is not ready."
Any pointers to this would be greatly appreciated.
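In case it helps anyone debugging the same error: the "cluster agent is not ready" message usually means the `cattle-cluster-agent` that Rancher deploys into the downstream cluster isn't running or can't reach the Rancher server. A rough way to check (assuming `kubectl` is pointed at the EKS cluster; `my-eks-cluster` below is a placeholder):

```shell
# Check whether Rancher's cluster agent deployment exists and is ready
kubectl -n cattle-system get deploy cattle-cluster-agent

# Inspect the agent logs for connectivity errors back to the Rancher server
kubectl -n cattle-system logs -l app=cattle-cluster-agent --tail=50
```

If the logs show the agent failing to reach the Rancher server URL, the problem is network reachability rather than the import itself.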
Hello! Does your EKS cluster expose the Kubernetes API endpoint to the public? You would be able to find this under the “Networking” tab in the EKS console. If this is set to “Private” instead of “Public” or “Public and private” then Rancher won’t be able to communicate with the cluster.
If the API is private, you could turn on public access in the EKS console and re-add the cluster.
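The same check and fix can be done from the AWS CLI instead of the console; a sketch, with `my-eks-cluster` as a placeholder cluster name:

```shell
# Show the current endpoint access settings for the cluster
aws eks describe-cluster --name my-eks-cluster \
  --query 'cluster.resourcesVpcConfig.{public:endpointPublicAccess,private:endpointPrivateAccess}'

# Enable public endpoint access (keeping private access on as well)
aws eks update-cluster-config --name my-eks-cluster \
  --resources-vpc-config endpointPublicAccess=true,endpointPrivateAccess=true
```

The `update-cluster-config` call takes a few minutes to apply, so wait until the cluster is Active again before retrying the Rancher registration.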
I ran into this issue when trying to import my cluster into a cluster-hosted Rancher: the already-deployed Rancher shows its own cluster as "local". I spent several hours on this kind of logic issue.