ryan
May 23, 2021, 2:39pm
When I try to import an EKS cluster into rancher, the import fails with:
This cluster is currently **Waiting**. Cluster must have at least one managed nodegroup.
The AWS console confirms there are no nodegroups.
I have a perfectly healthy Kubernetes cluster that uses an Auto Scaling Group instead of a Managed Node Group.
I created the cluster with Terraform, using “worker_groups” to build my nodes. The documentation I’ve read implies “worker_groups” is recommended because it gives you more control over your nodes than “node_groups” does.
Q: Is managed node group a requirement?
Q2: Does Rancher support ASG instead?
Q3: How to make this work?
Thanks!
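One possible workaround for Q3 is to keep the existing worker_groups and add a single small managed node group so the Rancher import has a nodegroup to detect. A minimal sketch, assuming the terraform-aws-eks module's v17-era inputs (the cluster name, sizes, and `var.*` references are placeholders; v18+ renamed these inputs to `self_managed_node_groups` / `eks_managed_node_groups`):

```hcl
# Sketch only: input names follow terraform-aws-eks ~v17; adjust for your module version.
module "eks" {
  source          = "terraform-aws-modules/eks/aws"
  version         = "~> 17.0"
  cluster_name    = "my-cluster"       # placeholder
  cluster_version = "1.18"
  vpc_id          = var.vpc_id         # assumed to exist in your config
  subnets         = var.subnet_ids     # assumed to exist in your config

  # Existing self-managed workers (ASG-based) stay as they are.
  worker_groups = [
    {
      instance_type        = "t3.medium"
      asg_desired_capacity = 2
    }
  ]

  # One small managed node group so Rancher's EKS import sees a nodegroup.
  node_groups = {
    rancher = {
      desired_capacity = 1
      min_capacity     = 1
      max_capacity     = 1
      instance_types   = ["t3.medium"]
    }
  }
}
```

The ASG-based workers keep running your workloads; the managed node group only has to exist and be healthy for the import check.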
I have the same issue. The cluster imports and works fine, but Rancher keeps flapping the state between Active and Waiting. I think Rancher should handle this and drop the requirement: if the API is reachable and there is at least one healthy node, the cluster should be Active.
Related issues I could find:
opened 12:34AM - 06 Feb 21 UTC
closed 02:39PM - 11 Feb 21 UTC
**What kind of request is this (question/bug/enhancement/feature request):** bug
**Steps to reproduce (least amount of steps as possible):**
- Create an EKS cluster in the EKS console. Do not add any nodegroups yet
- Import this cluster in Rancher.
- Cluster will be stuck in Waiting state with message `Cluster must have at least one managed nodegroup.`
- Add a nodegroup in the cluster from the EKS console.
- The nodegroups are seen on the "Nodes" page in Rancher, but Edit Cluster does not show the nodegroups
- The cluster is seen stuck in Waiting state with message "Cluster must have at least one managed nodegroup."
- Eks-operator logs say `time="2021-02-06T00:04:11Z" level=info msg="cluster [c-8d6w7] finished updating"`
**Expected Result:**
Cluster should come up Active
**Other details that may be helpful:**
**Environment information**
- Rancher version (`rancher/rancher`/`rancher/server` image tag or shown bottom left in the UI): on 2.5-head - commit id: `8201e080`
- Installation option (single install/HA): single
**Cluster information**
- Cluster type (Hosted/Infrastructure Provider/Custom/Imported): Import EKS v2
- Kubernetes version (use `kubectl version`):
```
1.18 (Any)
```
opened 12:50AM - 12 Nov 20 UTC
closed 02:22PM - 12 Feb 21 UTC
**What kind of request is this (question/bug/enhancement/feature request):**
bug
**Steps to reproduce (least amount of steps as possible):**
Register an EKS cluster that does not have a managed nodegroup
**Result:**
An error is displayed that states "Cannot deploy agent without nodegroups. Deploy a nodegroup".
This is triggered by not having a _managed_ nodegroup. Also, it's not completely true that an agent cannot be deployed without a managed nodegroup, but managed nodegroups are the supported way of managing nodes for an EKS cluster in Rancher. The error should be changed to something like "Cluster must have a managed nodegroup. Agent may not deploy without one".
**Notes:**
Clarification for anyone curious:
`The new EKS implementation exclusively supports managed nodegroups. That's not to say that the cluster cannot have non managed nodegroup nodes, but at least one is needed so rancher can confirm it is able to deploy the rancher agent and it is the way to provision new nodes through rancher`
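Given that clarification, a cluster built without the module can satisfy the check with a plain `aws_eks_node_group` resource. This is a sketch only: the `aws_eks_cluster.this`, `aws_iam_role.node`, and `var.subnet_ids` references are placeholders for resources assumed to exist in your configuration.

```hcl
# Minimal managed node group so Rancher's "at least one managed nodegroup" check passes.
resource "aws_eks_node_group" "rancher_agent" {
  cluster_name    = aws_eks_cluster.this.name  # placeholder reference to your cluster
  node_group_name = "rancher-agent"
  node_role_arn   = aws_iam_role.node.arn      # placeholder: node IAM role
  subnet_ids      = var.subnet_ids             # placeholder: cluster subnets

  scaling_config {
    desired_size = 1
    min_size     = 1
    max_size     = 1
  }

  instance_types = ["t3.medium"]
}
```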