Inter-container networking and external NAT

I have the following setup: three Docker hosts provisioned in OpenStack behind a firewall. The Rancher server has an internal IP address (in the same subnet as the Docker hosts) and an external floating IP address (in a different network than the hosts, obtained by NAT-ing the internal address).
If I set up the Rancher server to use the external floating IP address for registration, I get the following:

(all Docker hosts are registered with the same IP address).

The inter-container network does not work, even though the network agent containers start.

The question is: does this setup have any chance of working? Is there any way to instruct the Rancher server to differentiate between the hosts?

Thank you in advance,

PS: I’ve tried using the internal IP address for registration, and everything works fine.

Network Agents communicate with each other and set up IPsec tunnels between the public IP addresses of the hosts (as registered/shown in the host boxes in your screenshots). So no, there is no way it’s going to work if they’re all the same IP.

Using their internal IP addresses should work, unless you plan on adding other hosts that are on a different network. If you do, then the hosts in each network would be islands that can talk to their neighbors but not across the water to the other island.
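For what it’s worth, the agent registration command accepts a `CATTLE_AGENT_IP` environment variable to override the IP a host registers with, so each host can advertise its own address instead of whatever is auto-detected. A rough sketch (the server URL, agent version, token, and IP below are placeholders for your own values):

```shell
# Hypothetical values throughout: replace the IP, server URL, agent tag,
# and <token> with the ones from your own Rancher registration screen.
# CATTLE_AGENT_IP overrides the auto-detected IP this host registers with,
# so each host can be registered under a distinct reachable address.
sudo docker run -d --privileged \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -e CATTLE_AGENT_IP=10.0.0.11 \
  rancher/agent:v1.0.1 http://rancher-server:8080/v1/scripts/<token>
```

This only helps if the address you register is actually reachable from the other hosts, since that is the address the IPsec tunnels will be built between.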

Hi Vincent and thank you so much for your answer.

I’m wondering whether this limits the use case of Rancher across clouds and virtualization providers. It requires 1:1 NAT or publicly routable addresses, doesn’t it?


Despite the ever-impending IPv4 doom, most of the major cloud providers give public addresses away for free or at worst $1/mo, sometimes with no way to avoid getting a public IP on the instance.

Though we don’t support it today, IPv6 neatly eliminates the problem, with effectively infinite public IPs and no NAT.