How do I specify which interface to use with the “multiple network” support in vSphere provisioning?
My VMs are randomly selecting an interface. Only one is reachable from Rancher.
Hello, anyone? To recap: I have two vSphere networks, one for management and one for payload. I rejigged the ordering so the management network comes first, and nodes usually come up with an IP from it and life is good. Occasionally, though, they pick up IPs from the payload network and then can’t communicate with the rest of the nodes or the masters, so I need to delete them and have them recreated.
Is there ANYTHING Rancher can do to say: build RKE only on “this” network and run Kubernetes only on “this” network? Sorry if this is obvious, but it’s not to me.
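For what it’s worth, the network ordering described above can be made explicit at machine-creation time. The docker-machine vSphere driver accepts a repeatable `--vmwarevsphere-network` flag, and (in my experience) NICs are attached in the order the networks are listed, so putting the management port group first makes it the first NIC. A sketch, with all port-group names, credentials, and the node name being placeholders, not values from this thread:

```shell
# Sketch: create a node with the management port group listed first,
# so its NIC comes up first. All values below are placeholders.
docker-machine create --driver vmwarevsphere \
  --vmwarevsphere-vcenter vcenter.example.local \
  --vmwarevsphere-username administrator@vsphere.local \
  --vmwarevsphere-password 'changeme' \
  --vmwarevsphere-network "mgmt-portgroup" \
  --vmwarevsphere-network "payload-portgroup" \
  node-1
```

Rancher’s vSphere node templates expose the same network list, so the same ordering idea should apply there.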
Looks like it was a problem with my process and the YAML structure. After painstakingly creating the cluster config line by line and rebuilding, I was able to get the “public” network applied to the configuration. It is now working well, with Kubernetes on the one isolated IP and Rancher running on management.
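For anyone else landing here: RKE’s `cluster.yml` lets you pin which network each kind of traffic uses via the documented `address` and `internal_address` node fields (`address` is where RKE/Rancher reach the node; `internal_address` is used for intra-cluster Kubernetes traffic). A minimal sketch, where the IPs, user, and key path are placeholders, not values from this thread:

```yaml
# Sketch of an RKE cluster.yml node entry for a VM with two NICs.
# All IPs, the user, and the SSH key path below are placeholders.
nodes:
  - address: 192.168.10.21           # management-network IP; Rancher/RKE connect here
    internal_address: 172.16.20.21   # payload-network IP; Kubernetes inter-node traffic uses this
    user: rancher
    role: [controlplane, etcd, worker]
    ssh_key_path: ~/.ssh/id_rsa
```

Setting both fields explicitly avoids depending on whichever interface the node happens to bring up first.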
I’m having a similar issue. I want to deploy a single-node cluster via RKE and the vSphere driver with two NICs. The second NIC will later be used with a custom CNI, but for now I just want it ignored. The cluster never creates properly or passes its health checks. How did you force it to always use your particular interface?