Well this seems to be the solution to my problem, but where the heck is this file located?? Can it be edited through the Rancher 2.x web interface? Do I have to edit a file locally on the node where Rancher is running?
I searched Google for hours and can't seem to find the right way to do this.
EDIT
Ok, I finally found it. You can edit it through the Rancher 2.x web UI, but only using the YAML editor. To do this:
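For reference, this is roughly what the relevant part of the cluster YAML looks like in the Rancher 2.x "Edit as YAML" view for an RKE-provisioned cluster. The value is just an example, not a default, and the surrounding keys are a sketch of the structure rather than a complete cluster spec:

```yaml
# Sketch of the Rancher 2.x cluster YAML (RKE-provisioned cluster).
# Only the kubelet extra_args section is shown; "250" is an example value.
rancher_kubernetes_engine_config:
  services:
    kubelet:
      extra_args:
        max-pods: "250"
```

Saving the YAML should trigger a cluster update that restarts the kubelets with the new flag.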
This is still a problem, since you don't get this option on imported clusters. When you go to Edit Cluster, you only have:
- Member Roles
- Labels & Annotations

We are at 1% memory and 3% CPU usage but have already hit 80% of the pod limit. Very frustrating.
I’m not able to verify that, but that does sound frustrating
I’m in the process of deploying Rancher using RKE + Let's Encrypt, but I'm hitting a wall where the certificates aren't properly created, or are stuck in some staging phase.
Imported clusters are defined and managed by whatever created them, and you’d need to change the setting there. Rancher only has access to the resources “inside” of the cluster; it has no idea what created the “outside” of the cluster itself, doesn’t know how to change its configuration, and doesn’t have the credentials that may be needed to change it.
Is there a way to go past 250 pods per node?
It would be awesome to have 500 pods per node so I can utilize the full cluster performance. I kinda don't know how to change the podCIDRs from 10.x.x.x/24 to 10.x.x.x/20 inside an imported RKE cluster.
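A quick sanity check on the numbers: a /24 node podCIDR gives 2^(32−24) = 256 addresses, which is why it pairs with the default max-pods of 110; for 500 pods per node you need at least a /23 (512 addresses), and a /22 leaves headroom. In a standalone RKE cluster.yml, the node CIDR size comes from the kube-controller-manager's node-cidr-mask-size flag, something like the sketch below. Note the caveat: controllers generally don't re-allocate podCIDRs for nodes that already have one, so existing nodes typically need to be removed and re-added (and for a Rancher-imported cluster, this has to be changed wherever the cluster was originally created, per the earlier reply):

```yaml
# Sketch of an RKE cluster.yml fragment; values are examples.
# node-cidr-mask-size only affects newly registered nodes.
services:
  kube-controller:
    cluster_cidr: 10.42.0.0/16
    extra_args:
      node-cidr-mask-size: "22"
  kubelet:
    extra_args:
      max-pods: "500"
```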
So I am having the same issue with a cluster created via the Rancher interface. If I create a new cluster and deploy the Docker container as per the instructions, it goes off and sets up everything correctly, but as above I can't alter it in the YAML. Does this mean I would be better off creating the new bare-metal cluster via RKE, which allows me to update max-pods? Is there any way of adding extra options on cluster creation?
Update: I am using Rancher 2.4.6
Ok, so I ended up using RKE on the bare-metal cluster and adding the settings as per above. Then I used the Import Cluster option instead of Create New Cluster. Not a real solution, but it works.
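For anyone following the same workaround: the standalone-RKE route boils down to a cluster.yml like the sketch below (node address and user are placeholders), followed by `rke up --config cluster.yml` and then Rancher's Import Existing option:

```yaml
# Hypothetical minimal cluster.yml for the standalone-RKE workaround.
# Replace the address/user with your own nodes.
nodes:
  - address: 192.168.1.10
    user: ubuntu
    role: [controlplane, etcd, worker]
services:
  kubelet:
    extra_args:
      max-pods: "250"
```

Since the cluster is then imported rather than Rancher-provisioned, later changes to max-pods also have to be made in cluster.yml and applied with `rke up`, not through the Rancher UI.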
Also would like to know how to increase max-pods in those Rancher-interface-created clusters. I'm using AWS EKS and do not see a way to increase max-pods. This is a huge bottleneck for us atm.
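On EKS specifically, max-pods isn't a Rancher setting at all; it's configured on the node group. If you manage node groups with eksctl, there is a maxPodsPerNode field, roughly as sketched below (cluster name, region, and instance type are placeholders). Bear in mind that on EKS the effective limit is also capped by the instance type's ENI/IP allowance unless VPC CNI prefix delegation is enabled:

```yaml
# Hypothetical eksctl ClusterConfig fragment; names/values are examples.
# maxPodsPerNode is still bounded by the instance type's ENI/IP limits.
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: my-cluster
  region: us-east-1
managedNodeGroups:
  - name: workers
    instanceType: m5.xlarge
    maxPodsPerNode: 110
```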
The answer is NO.
At least from the RKE2 cluster config, setting kubelet args, agent args, etc. for max-pods does nothing and it doesn't reconcile… big problem. I just ordered more RAM for my systems and can't fit all the pods because I can't increase from a measly 110 pods per node…
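One possible workaround, since the args reportedly don't reconcile from the Rancher UI: RKE2 also reads kubelet flags from a per-node config file, so you may be able to set max-pods directly on each node and restart the rke2 service. A sketch, assuming the standard RKE2 config location:

```yaml
# Hypothetical /etc/rancher/rke2/config.yaml on each node.
# Applied when the rke2-server or rke2-agent service restarts;
# not managed (and possibly overwritten) by Rancher.
kubelet-arg:
  - "max-pods=250"
```

Remember that going past ~110 pods per node also requires a large enough node podCIDR, as discussed above.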