v2.7.0 - Issues when deploying a new k8s cluster

Hi all,

I have a Rancher setup on v2.7.0 with an existing k8s cluster running. For simplicity, let's say the node pools are in subnet 192.168.10.0/24 (nodes .1, .2, .3). The node pools are created with RKE and hosted on vSphere infrastructure. They don't share templates, since each node has a static IP address assigned (the same applies to the new cluster below).

Now I try to deploy a new k8s cluster on the same subnet, starting with a single node pool, let's say at 192.168.10.50. The moment I start deploying the new cluster, the existing cluster starts misbehaving: kubelet stops working correctly and I'm flooded with alerts.
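
If it helps, this is the kind of duplicate-address check I can run from one of the existing nodes to rule out a plain IP conflict with the new node (the interface name below is just a placeholder):

```bash
# Duplicate address detection: probe the new node's address from an existing node.
# iputils arping in -D mode exits non-zero if another host answers for that IP.
sudo arping -D -I eth0 -c 3 192.168.10.50 \
  && echo "no conflict detected" \
  || echo "address already in use by another host"
```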

I've compared the templates and I don't see anything that could collide with the existing cluster and cause it to misbehave like this, so I wonder if there is any limitation in Rancher/K8s when deploying a new cluster onto a subnet that already hosts one, even when there's no overlap.
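
For reference, these are the network-related RKE fields I'm comparing between the two cluster configs; the values below are just the RKE defaults, shown to illustrate what I mean by "no overlap" (neither of them touches the 192.168.10.0/24 node subnet):

```yaml
# Relevant part of the RKE config (it sits under rancher_kubernetes_engine_config
# in the Rancher cluster spec); values shown are the RKE defaults, for illustration.
services:
  kube-api:
    service_cluster_ip_range: 10.43.0.0/16   # service CIDR
  kube-controller:
    cluster_cidr: 10.42.0.0/16               # pod CIDR
    service_cluster_ip_range: 10.43.0.0/16
network:
  plugin: canal
```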

Thanks in advance!