Large RKE cluster. Change podCIDR


We deployed an RKE cluster on 8 large nodes (512 GB RAM and 64 CPU cores each) using Canal as the CNI.

In its default configuration, each host gets configured with FLANNEL_SUBNET=10.x.x.x/24.
We’d like to change this to FLANNEL_SUBNET=10.x.x.x/22 (i.e., give each host a larger subnet).

How can this be accomplished?
Can this be changed after a cluster has already been deployed?

We already increased the “max_pods” value for kubelet to accommodate more containers per host.
The goal is for this cluster to be able to comfortably run 2000+ containers.
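For reference, raising the kubelet pod limit in an RKE cluster.yml looks roughly like this (a sketch; the value 250 is illustrative, chosen to fit 2000 pods across 8 nodes):

```yaml
# cluster.yml (RKE) — illustrative snippet
services:
  kubelet:
    extra_args:
      max-pods: "250"   # kubelet default is 110; raise to fit more pods per node
```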



I’m not sure 10x the supported number of pods per node is really going to work out well…
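As a quick sanity check on the numbers (the 8-node and 2000-pod figures come from the original post):

```python
# Back-of-envelope: target pods per node vs. addresses available per subnet size
nodes = 8
target_pods = 2000
pods_per_node = target_pods / nodes          # 250.0 pods per node

def usable_ips(prefix_len: int) -> int:
    # Total addresses in the subnet, minus network and broadcast addresses
    return 2 ** (32 - prefix_len) - 2

print(pods_per_node)        # 250.0
print(usable_ips(24))       # 254 — barely enough, before other reservations
print(usable_ips(22))       # 1022 — comfortable headroom
```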


Hi! We have the same problem. In cluster.yml, RKE generates something like


but on the node itself:

# cat /run/flannel/subnet.env 

Why /24? I have no idea… But now I get errors when creating new pods:

NetworkPlugin cni failed to set up pod "*****masked *******" network: failed to allocate for range 0: no IP addresses available in range set:


@daniel.milani.bell have you solved it?


The default configuration is that the cluster gets a /16 for the pod network and each node is allocated a /24 out of it. That allows 256 nodes with up to 254 pod IPs each (minus the network and broadcast addresses, gateway, etc.). To change the per-node allocation, set --node-cidr-mask-size as an extra_args entry on kube-controller.
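Concretely, in RKE’s cluster.yml that would look something like the following (a sketch; the value 22 matches the /22-per-node goal above):

```yaml
# cluster.yml (RKE) — illustrative snippet
services:
  kube-controller:
    extra_args:
      node-cidr-mask-size: "22"   # each node gets a /22 instead of the default /24
```

Note that with a /16 cluster CIDR, /22 per node caps the cluster at 64 nodes, and as far as I know the controller manager will not re-allocate podCIDRs already assigned to existing nodes, so this is best set before the cluster is provisioned.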