Large RKE cluster. Change podCIDR

Hi,
We deployed an RKE cluster on 8 large nodes (512GB Ram and 64 CPU cores) using CANAL as the CNI.

In its default configuration, each host gets configured with FLANNEL_SUBNET=10.x.x.x/24.
We’d like to change this to FLANNEL_SUBNET=10.x.x.x/22 (a larger per-node subnet).

How can this be accomplished?
Can this be changed after a cluster has already been deployed?

We already increased the “max_pods” value for kubelet to accommodate more containers per host.
The goal is for this cluster to be able to comfortably run 2000+ containers.
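As a quick sanity check on that goal (not from the thread, just Python’s stdlib ipaddress module; the 10.42.0.0/22 address is illustrative), a /22 per node leaves roughly 1022 pod IPs per host, so 8 nodes comfortably cover 2000+ containers:

```python
import ipaddress

# Usable pod IPs in a single /22 node subnet
# (roughly: total addresses minus network and broadcast)
node_subnet = ipaddress.ip_network("10.42.0.0/22")
usable_per_node = node_subnet.num_addresses - 2
print(usable_per_node)       # 1022

# 8 nodes at ~250 pods each easily fits in that budget
print(8 * usable_per_node)   # 8176
```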

Thanks.

I’m not sure 10x the supported number of pods per node is really going to work out well…

Hi! We have the same problem. In the cluster.yml, RKE generates something like

cluster_cidr: 10.42.0.0/16

but on node itself

# cat /run/flannel/subnet.env 
FLANNEL_NETWORK=10.42.0.0/16
FLANNEL_SUBNET=10.42.0.1/24
FLANNEL_MTU=1450
FLANNEL_IPMASQ=true

Why /24? I have no idea… But now I get errors while creating new pods:

NetworkPlugin cni failed to set up pod "*****masked *******" network: failed to allocate for range 0: no IP addresses available in range set: 10.42.0.1-10.42.0.254

@daniel.milani.bell have you solved it?

The default configuration is that the cluster gets a /16 and each node gets a /24 of it. That allows 256 nodes with 256 pod IPs each (minus the network, gateway, and broadcast addresses). To change the per-node size, set --node-cidr-mask-size as an extra_args entry on kube-controller.
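That default split can be checked with Python’s stdlib ipaddress module (the 10.42.0.0/16 cluster CIDR is the RKE default quoted earlier in the thread):

```python
import ipaddress

# Default split: cluster gets a /16, each node a /24 of it
cluster = ipaddress.ip_network("10.42.0.0/16")
per_node = list(cluster.subnets(new_prefix=24))
print(len(per_node))                  # 256 node subnets
print(per_node[0].num_addresses - 2)  # 254 usable pod IPs per node
```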

Here is an example cluster.yml snippet.

services:
  kube-api:
    service_cluster_ip_range: 10.16.0.0/12
  kube-controller:
    cluster_cidr: 10.0.0.0/12
    extra_args:
      node-cidr-mask-size: '16'
    service_cluster_ip_range: 10.16.0.0/12
  kubelet:
    cluster_dns_server: 10.16.0.10
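To verify what that example yields (a sketch using Python’s stdlib ipaddress module with the values from the snippet): a 10.0.0.0/12 cluster_cidr carved into /16 node blocks gives 16 node subnets, each with room for tens of thousands of pod IPs:

```python
import ipaddress

# cluster_cidr and node-cidr-mask-size from the example above
cluster_cidr = ipaddress.ip_network("10.0.0.0/12")
node_subnets = list(cluster_cidr.subnets(new_prefix=16))
print(len(node_subnets))                   # 16 node subnets
print(node_subnets[0].num_addresses - 2)   # 65534 usable addresses each
```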