When I bring up my cluster, RKE is ignoring the network ranges I've specified in cluster.yml and is using the defaults instead: 10.43.0.0/16 for services and 10.42.0.0/16 for pods, rather than the completely different ranges in my cluster.yml (see below). I've read through the documentation for cluster.yml several times and cannot see what I'm doing wrong here. I was hoping someone could point me in the right direction on this. Thanks.
From cluster.yml:
kube-api:
  # IP range for any services created on Kubernetes
  # This must match the service_cluster_ip_range in kube-controller
  service_cluster_ip_range: 192.168.128.0/17
  # Expose a different port range for NodePort services
  service_node_port_range: 30000-32767
  pod_security_policy: false
kube-controller:
  # CIDR pool used to assign IP addresses to pods in the cluster
  cluster_cidr: 192.168.0.0/17
  # IP range for any services created on Kubernetes
  # This must match the service_cluster_ip_range in kube-api
  service_cluster_ip_range: 192.168.128.0/17
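One thing I'm unsure about: the RKE docs I've been reading show these options nested under a top-level services: key, rather than at the top level of the file. A sketch of that layout with my values (this nesting is just my reading of the docs, not something I've confirmed is the cause):

services:
  kube-api:
    service_cluster_ip_range: 192.168.128.0/17
    service_node_port_range: 30000-32767
    pod_security_policy: false
  kube-controller:
    cluster_cidr: 192.168.0.0/17
    service_cluster_ip_range: 192.168.128.0/17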
From cluster.rkestate:
"kubeApi": {
  "image": "rancher/hyperkube:v1.20.11-rancher1",
  "serviceClusterIpRange": "10.43.0.0/16",
  ...
"kubeController": {
  "image": "rancher/hyperkube:v1.20.11-rancher1",
  "clusterCidr": "10.42.0.0/16",
Versions:
$ rke --version
rke version v1.3.1
$ grep PRETTY /etc/os-release
PRETTY_NAME="Red Hat Enterprise Linux 8.4 (Ootpa)"