Unable to Register an EKS cluster into Rancher v2.5.1

Hi

I created an EKS cluster along with managed node groups using the following eksctl command:

eksctl create cluster --name hvg-eks-cluster --version 1.16 --nodegroup-name hvg-eks-workers --node-type t3.medium --node-volume-size=150 --nodes 3 --nodes-min 3 --nodes-max 6 --node-ami-family AmazonLinux2 --ssh-access --region=ap-south-1 --set-kubeconfig-context=true --managed=true

The EKS cluster and the managed node groups became Active after a few minutes.

I logged in to the Rancher UI to register this EKS cluster. The cluster name appears in the drop-down menu as well, indicating there is no issue with the AWS credentials.

However, after importing the cluster into Rancher I see the following error message: Cluster health check failed: cluster agent is not ready.
Any pointers to this would be greatly appreciated.
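
For reference, the agent that Rancher deploys into the downstream cluster can be checked with kubectl. This is just a minimal sketch, assuming the agent was installed into the usual cattle-system namespace:

# list the Rancher agent pods in the downstream EKS cluster
kubectl -n cattle-system get pods

# show why the cluster agent is not becoming ready
kubectl -n cattle-system logs deploy/cattle-cluster-agent

If the cattle-cluster-agent pod is crash-looping, its logs usually show whether it can reach the Rancher server URL at all.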

Hello! Does your EKS cluster expose the Kubernetes API endpoint to the public? You can find this under the “Networking” tab in the EKS console. If this is set to “Private” instead of “Public” or “Public and private”, then Rancher won’t be able to communicate with the cluster.

If the API is private, you can turn on public access in the EKS console and re-add the cluster.
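
If you prefer checking from the CLI instead of the console, something along these lines (using the cluster name and region from your eksctl command) should show the same setting:

# prints true when public access to the API endpoint is enabled
aws eks describe-cluster --name hvg-eks-cluster --region ap-south-1 --query 'cluster.resourcesVpcConfig.endpointPublicAccess'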

Hi @thedadams, I see that the API server endpoint access is set to Public.
Screenshot attached.

I get the same error when attempting to create an EKS cluster with Rancher. The AWS credentials have admin access, so it shouldn’t be an IAM issue.

Same issue here. I also tried with Rancher 2.5.5, but no luck.

A possible source of this issue is EKS nodes that cannot reach back to the Rancher server, whether due to DNS resolution, routing, or firewalling. You can test this from inside the cluster as sketched below.
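
One rough way to rule that out is to start a throwaway pod inside the EKS cluster and try to reach the Rancher server URL from it, for example (replace the hostname with your own Rancher URL):

# run a one-off curl pod against Rancher's /ping endpoint; a healthy, reachable server answers with "pong"
kubectl run rancher-reachability-test --rm -it --restart=Never --image=curlimages/curl -- curl -skv https://<your-rancher-hostname>/ping

DNS errors or timeouts in that output would point at the network path between the EKS VPC and Rancher rather than at Rancher itself.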

I am having the same issue and do not know how to fix it. The Rancher webinar makes it look pretty straightforward, but in reality it doesn’t work.

I got this issue when trying to import my cluster into a Rancher that is itself hosted on a cluster. The already-deployed Rancher shows its own cluster as “local”… I’ve spent some hours on this.