Rancher Container Networking

I’m attempting to build a cluster in which several of the VMs are on different subnets, but within the same cluster. That is, one group of VMs is on subnet 1, and another group is on subnet 2. Each subnet has access to the other based on firewall rules. Even with everything open, it appears that my cluster cannot communicate among itself. Has anyone ever tried this, and if so, did you run into any issues?


Are you attempting to use the ‘overlay’ network, with IP addresses in the range 10.42.*?

If so, this uses IPsec VPN technology to communicate between nodes. Traffic goes from the container itself to the network agent (rancher/agent-instance), which manages a VPN connection to the other nodes.

For traffic between your nodes to work, this VPN needs access, on the underlying host IPs, to ports 500/UDP (IKE) and 4500/UDP (IPsec NAT traversal). With those open, the overlay network should work between your subnets.
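
If you want to sanity-check reachability of those ports before digging into the overlay itself, something like the following can help. This is just a rough sketch, not anything Rancher ships: the peer IPs are placeholders for your hosts’ underlying (non-overlay) addresses, and since UDP is connectionless, a timeout only tells you the port is open *or* filtered, while an ICMP port-unreachable (surfacing as `ConnectionRefusedError` on Linux) tells you the packet arrived but nothing is listening.

```python
import socket

# Placeholder peer addresses: substitute the underlying (non-overlay)
# IPs of your Rancher hosts. 500/udp is IKE, 4500/udp is NAT traversal.
PEERS = ["192.168.1.10", "192.168.2.10"]
PORTS = [500, 4500]

def probe_udp(host, port, timeout=2.0):
    """Best-effort UDP probe: a refusal means an ICMP port-unreachable
    came back (reachable host, closed port); a timeout means the port
    is open or the packet was silently filtered."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.settimeout(timeout)
        s.connect((host, port))
        try:
            s.send(b"\x00")   # harmless one-byte probe
            s.recv(1)         # IKE daemons won't answer junk, so expect a timeout
            return "open (got a reply)"
        except ConnectionRefusedError:
            return "closed (ICMP port unreachable)"
        except socket.timeout:
            return "open or filtered (no response)"

for host in PEERS:
    for port in PORTS:
        print(f"{host}:{port}/udp -> {probe_udp(host, port)}")
```

Run it from each host against the others; a definitive answer still requires checking your firewall rules, since the IKE daemon generally won’t reply to a junk datagram.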

There are scenarios in which overlay networking doesn’t behave correctly. The first step is to open a console on the network agent itself and try pinging it, using its own 10.42.* IP. Then try to ping a network agent on another host on the same subnet. If that works, your overlay network is fine, and it is just a question of getting the underlying network configured correctly; a scripted version of this check is sketched below.
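
To run that second check across several agents at once, a quick script can ping each overlay IP from a console inside the network agent. A minimal sketch, assuming Python and `ping` are available in the container, and with the 10.42.* addresses below standing in for your agents’ real IPs:

```python
import subprocess

# Placeholder overlay IPs: replace with the 10.42.* addresses of the
# network agents on your other hosts.
AGENT_IPS = ["10.42.100.1", "10.42.200.1"]

def ping(ip, count=3, timeout=5):
    """True if the host answers ICMP echo within the timeout."""
    result = subprocess.run(
        ["ping", "-c", str(count), "-W", str(timeout), ip],
        stdout=subprocess.DEVNULL,
        stderr=subprocess.DEVNULL,
    )
    return result.returncode == 0

for ip in AGENT_IPS:
    print(f"{ip}: {'reachable' if ping(ip) else 'UNREACHABLE'}")
```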


@Upayavira: Thanks for sharing the information.
@badams: Please check whether the ports are open, as pointed out by @Upayavira. Also check the logs of the Network Agent and of charon, the IPsec daemon running inside it.
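
For pulling those logs in one pass, a small script along these lines works; the `name=agent-instance` filter is an assumption based on the container name above, so adjust it to match whatever `docker ps` shows on your hosts:

```python
import subprocess

# The container-name filter is an assumption based on the image name
# above (rancher/agent-instance); adjust to match `docker ps` output.
ids = subprocess.run(
    ["docker", "ps", "-q", "--filter", "name=agent-instance"],
    capture_output=True, text=True,
).stdout.split()

for cid in ids:
    logs = subprocess.run(
        ["docker", "logs", "--tail", "200", cid],
        capture_output=True, text=True,
    )
    # charon is strongSwan's IKE daemon; its errors usually explain
    # why the IPsec tunnels between hosts fail to come up.
    for line in (logs.stdout + logs.stderr).splitlines():
        if "charon" in line.lower() or "error" in line.lower():
            print(f"[{cid[:12]}] {line}")
```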

So I did a clean install, and am now on 1.1.0-dev5 I believe, and the issue has been resolved. At any rate, the ports between the nodes were all open, including the specific UDP ports needed for the traffic to flow. I suspect the earlier problems were caused by restarting my cluster too many times.