[Solved] Setting Max Pods

How do you change the Max Pods after you have created a cluster?

How do you set the Max Pods before you create a new cluster?

With the default of 110 max pods per node, the limit is hit very quickly while plenty of hardware resources remain for two or three times as many pods.

Update: See superseb’s answer

Solution:

addon_job_timeout: 30
authentication: 
  strategy: "x509"
ignore_docker_version: true
# 
#   # Currently only nginx ingress provider is supported.
#   # To disable ingress controller, set `provider: none`
#   # To enable ingress on specific nodes, use the node_selector, eg:
#      provider: nginx
#      node_selector:
#        app: ingress
# 
ingress: 
  provider: "nginx"
kubernetes_version: "v1.11.3-rancher1-1"
monitoring: 
  provider: "metrics-server"
# 
#   # If you are using calico on AWS
# 
#      network:
#        plugin: calico
#        calico_network_provider:
#          cloud_provider: aws
# 
#   # To specify flannel interface
# 
#      network:
#        plugin: flannel
#        flannel_network_provider:
#          iface: eth1
# 
#   # To specify flannel interface for canal plugin
# 
#      network:
#        plugin: canal
#        canal_network_provider:
#          iface: eth1
# 
network: 
  options: 
    flannel_backend_type: "vxlan"
  plugin: "canal"
# 
#      services:
#        kube_api:
#          service_cluster_ip_range: 10.43.0.0/16
#        kube_controller:
#          cluster_cidr: 10.42.0.0/16
#          service_cluster_ip_range: 10.43.0.0/16
#        kubelet:
#          cluster_domain: cluster.local
#          cluster_dns_server: 10.43.0.10
# 
services: 
  etcd: 
    extra_args: 
      heartbeat-interval: 500
      election-timeout: 5000
    snapshot: false
  kube-api: 
    pod_security_policy: false
    service_node_port_range: "30000-32767"
  kubelet: 
    extra_args:
      max-pods: 500
ssh_agent_auth: false
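Once the cluster reconciles, you can confirm the kubelet actually picked up the new limit (a quick sanity check, assuming you have kubectl access; `<node-name>` is a placeholder):

```shell
# Show the pod capacity reported by each node's kubelet
kubectl get nodes -o custom-columns=NAME:.metadata.name,PODS:.status.capacity.pods

# Or query a single node (replace <node-name> with a real node name)
kubectl get node <node-name> -o jsonpath='{.status.capacity.pods}'
```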

You can configure anything using https://rancher.com/docs/rke/v0.1.x/en/config-options/ (specifically https://rancher.com/docs/rke/v0.1.x/en/config-options/services/services-extras/#extra-args) and https://rancher.com/docs/rancher/v2.x/en/cluster-provisioning/rke-clusters/options/#config-file

I can’t seem to find any configuration flags for “max pods”, or for “max” and “pods”.

The parameter extra_args lets you add any argument to any service. Also see https://github.com/rancher/docs/pull/851/files


That was the perfect example. It was much clearer than the links to the other documentation!

Thanks!

Well, this seems to be the solution to my problem, but where the heck is this file located? Can it be edited through the Rancher 2.x web interface? Do I have to edit a file locally on the node where Rancher is running?

I searched Google for hours and can’t seem to find the right way to do this :sob:

EDIT
Ok, I finally found it. You can edit it through the Rancher 2.x web UI, but only using the YAML editor. To do this:

  • Open your cluster dashboard of choice
  • Open Options […] -> Edit
  • Click the Edit as YAML button
  • Find the services section
  • Find the kubelet section
  • Add the following to it:
  services:
    [...]
    kubelet:
      fail_swap_on: false
      [...]
      extra_args:
        max-pods: '250'

This is still a problem, since you don’t get this option in imported clusters. When you go to edit the cluster you only have Member Roles and Labels & Annotations. We have 1% memory used and 3% CPU used, with 80% of the pod limit reached already. Very frustrating.


I’m not able to verify that, but it does sound frustrating.

I’m in the process of deploying Rancher using rke + letsencrypt, but I’m hitting a wall where the certificates aren’t properly created, or are stuck in some staging phase.

Not sure yet.

Imported clusters are defined and managed by whatever created them, and you’d need to change the setting there. Rancher only has access to the resources “inside” of the cluster; it has no idea what created the “outside” of the cluster itself, doesn’t know how to change its configuration, and doesn’t have the credentials that may be needed to change it.

Is there a way to go past 250 pods per node?
It would be awesome to have 500 pods per node so I can utilize the full cluster performance. I don’t quite know how to change the podCIDRs from 10.x.x.x/24 to 10.x.x.x/20 inside an imported RKE cluster.
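For a cluster that RKE itself provisions (this won’t help on an imported one), the per-node podCIDR size is controlled by the kube-controller-manager’s `node-cidr-mask-size` flag, which RKE can pass via `extra_args`. A sketch of what that would look like in `cluster.yml`, with illustrative CIDRs; note that nodes that have already been allocated a podCIDR keep it, so in practice this only affects newly registered nodes:

```yaml
services:
  kube-controller:
    cluster_cidr: "10.42.0.0/16"
    extra_args:
      # Default mask is /24 (~256 addresses per node); /20 allows ~4096,
      # enough headroom for 500 pods per node
      node-cidr-mask-size: "20"
  kubelet:
    extra_args:
      max-pods: 500
```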

So I am having the same issue with a cluster created via the Rancher interface. If I create a new cluster and deploy the docker container as per the instructions, it goes off and sets up everything correctly, but as above I can’t alter it in the YAML. Does this mean I would be better off creating the new bare-metal cluster via rke, which allows me to update max pods? Is there any way of adding extra options on cluster creation?
Update: I am using Rancher 2.4.6

Ok, so I ended up using RKE on the bare-metal cluster and adding the settings as per above. Then I used the import cluster option instead of the create new cluster option. Not an ideal solution, but it works.

I would also like to know how to increase max-pods in those Rancher-interface-created clusters. I’m using AWS EKS and do not see a way to increase max-pods. This is a huge bottleneck for us atm.
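On EKS the limit isn’t set by Rancher at all: the EKS-optimized AMI’s bootstrap script computes max-pods from the instance type’s ENI limits. One way around it is to override this in the node group’s user data (a sketch, assuming the Amazon Linux EKS-optimized AMI; `my-cluster` and the value 250 are placeholders):

```shell
#!/bin/bash
# Skip the ENI-based max-pods calculation and set our own value.
# Caveat: with the default AWS VPC CNI, pod IPs still come from ENIs,
# so raising max-pods alone may not give you more routable pod IPs.
/etc/eks/bootstrap.sh my-cluster \
  --use-max-pods false \
  --kubelet-extra-args '--max-pods=250'
```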

Will this work on RKE2 clusters? O.o

The answer is NO.
At least from the RKE2 cluster config, setting kubelet args, agent args, etc. for max-pods does nothing and it doesn’t reconcile… a big problem. I just ordered more RAM for my systems and can’t fit all the pods, because I can’t increase from a measly 110 pods per node…
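For what it’s worth, on RKE2 the kubelet flag can also be set per node in the node’s own config file, bypassing the Rancher UI entirely (a sketch; restart the rke2-server or rke2-agent service afterwards for it to take effect):

```yaml
# /etc/rancher/rke2/config.yaml
kubelet-arg:
  - "max-pods=250"
```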

Rancher 2.6.12, downstream cluster v1.23.16+rke2r1. This works for me:
[screenshot of the cluster config, not reproduced here]