I have a Harvester cluster and I want to create 3 VMs that will form a Kubernetes cluster. For this, the VMs need static IPs, which clashes with the mgmt cluster network: it insists on handing a VM a new IP on every restart.
Currently, I do the following:
- Set node specs
- Choose VM image (openSUSE)
- Set network options (use bridge, not masquerade; see the spec fragment after this list)
- Add cloud config for static IP.
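On the bridge vs masquerade point: as far as I can tell, in the VirtualMachine YAML that Harvester generates this just comes down to the interface binding in the KubeVirt spec. A rough fragment (interface and network names are from my own setup and paraphrased from memory, so treat it as a sketch rather than the exact YAML):

```yaml
# Fragment of the generated VirtualMachine spec (paraphrased)
spec:
  template:
    spec:
      domain:
        devices:
          interfaces:
            - name: nic-1
              bridge: {}          # what the "bridge" option maps to
              # masquerade: {}    # what "masquerade" would map to instead
      networks:
        - name: nic-1
          multus:
            networkName: default/rancher   # the VM network (a NetworkAttachmentDefinition)
```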
However, when the VM is created, Harvester still assigns it an IP of its own. So the VM ends up with both the static IP from the cloud config and the Harvester-assigned one, and it is not reachable on either of the two.
In case it's relevant: I installed the harvester-vm-dhcp-controller addon to allocate IPs from my LAN to VMs in Harvester, but the issue persists whether the addon is enabled or not.
I have also created VM networks, both DHCP and manual, but it makes no difference.
I got it to work, for the most part, and learnt a bit more about networking in Harvester along the way. I now have a new VM network called rancher with manual routing, which I set in the Route tab (172.16.0.1/24 with 172.16.0.1 as the gateway).
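As far as I can tell, a Harvester VM network is just a Multus NetworkAttachmentDefinition under the hood, and the values from the Route tab end up as an annotation on it. Roughly like this (the annotation key, bridge name, and VLAN here are from memory and my own setup, so double-check against your cluster):

```yaml
apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: rancher
  namespace: default
  annotations:
    # Route tab settings; key name and JSON shape paraphrased from memory
    network.harvesterhci.io/route: '{"mode":"manual","cidr":"172.16.0.1/24","gateway":"172.16.0.1"}'
spec:
  # bridge CNI config; the bridge name and VLAN depend on your cluster network
  config: '{"cniVersion":"0.3.1","name":"rancher","type":"bridge","bridge":"mgmt-br","promiscMode":true,"vlan":100,"ipam":{}}'
```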
When I create the VM, I leave the mgmt network as it is and add the new rancher network as a second interface. Once the VM is up, ssh to the IP I set gives a “no route to host” error, and ssh to the IP handed out by the mgmt network gives the same. I figure that at this point it's an issue with my own network configuration.
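To be concrete, what I'm trying to end up with via the cloud-init network data is roughly the following (interface names and the .10 address are placeholders from my guest; the actual config I settled on is at the end of this post):

```yaml
# cloud-init network config (version 2), attached as the VM's network data
version: 2
ethernets:
  eth0:              # first NIC: mgmt network, kept on DHCP so the host can still reach the VM
    dhcp4: true
  eth1:              # second NIC: rancher network, static IP for inter-VM traffic
    dhcp4: false
    addresses:
      - 172.16.0.10/24
    # no default route here; the default gateway stays on the mgmt interface
```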
To troubleshoot further, I created a test network with DHCP instead of static IPs. In this case, the VM does get an IP from the mgmt network, which I can use to ssh into the VM from the Harvester host node (I can't use the test-network IP to ssh in, though that's acceptable for me).
After going through everything, I think it's acceptable for the VM to be assigned a mgmt IP, since without it I can't ssh into the VM from the host. That's fine, as long as I also have a static IP somewhere in there for inter-VM networking.
This is the cloud config used now for the static IP: