What are the options for accessing the overlay network that Rancher manages? I've changed the subnet Rancher uses from the default 10.42.0.0/16 to a private /24 that is routable within our office environment, and tested it. This works.
My question now is: how do I access containers that are assigned addresses on that routable network? All attempts fail, most likely because of the overlay network. Before, we had an OpenVPN instance listening in a container, but I don't much like that solution since we have multiple nodes and it is not very redundant or easy to maintain.
My take is that the overlay network is intended for communication between nodes, not for access from the outside. For that, I'd use the host's IP. That can be retrieved from the Rancher metadata API from within a container on the node, or, if Rancher was set up correctly, by clicking the 'Ports' tab for the service.
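As a sketch of the metadata lookup: in Rancher 1.x managed networks the metadata service is reachable from inside containers under the DNS name `rancher-metadata`; the exact endpoint path below is my assumption from the 1.x metadata API and may differ in your version:

```shell
# Query the Rancher metadata service from inside a container.
# The /latest/self/host/agent_ip path is assumed from the
# Rancher 1.x metadata API; adjust if your version differs.
curl -s http://rancher-metadata/latest/self/host/agent_ip
```

This only works from within a container attached to the managed network, since the metadata service itself lives on the overlay.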
You set up Docker on a host that is already routable in your network; the overlay network handles communication between your Docker hosts, but once a container publishes a port, that port on the host is how you access the service from outside.
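As a minimal sketch of what that looks like in a stack's `docker-compose.yml` (the service name and port numbers here are placeholders, not from the original thread):

```yaml
# Hypothetical service: "8080:80" publishes container port 80
# on host port 8080, so the service is reachable at
# <host-ip>:8080 from anywhere that can route to the host.
web:
  image: nginx
  ports:
    - "8080:80"
```

Traffic then enters through the host's routable IP rather than through the overlay subnet.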
From what (little) I understand of the implementation of the overlay network, I'd suspect you're heading into deep waters if you try to extend it beyond the scope of Rancher-managed hosts. Rancher maintains a VPN connection between each pair of nodes on the network so that traffic between hosts can be direct and encrypted.
We are working on supporting CNI plugins and converting the current managed networking into one. This type of use case would probably be better suited to a different networking provider plugin, for example one that simply uses the provided subnet directly.