Communicating with Rancher IPs from external services

Would it be possible to somehow resolve the Rancher IPs externally, outside of Rancher?
Let's say I have some services running in Rancher, and some running on bare metal outside of Rancher.

The services inside Rancher can communicate with the external ones, as the containers can resolve the external IPs.
But what about the external services: is it somehow possible to make them able to reach the Rancher 10.42.x.y IPs?

What would be the correct way to go about this?
I don't want to do port mapping between the hosts and the containers.

What I want to accomplish is to run an external Consul cluster, and then have all services join it and be able to communicate.
Is it possible to connect the external servers to the Rancher network agents somehow to resolve the IPs?

Is this what I need? External DNS Server

The network set up by Rancher is an SDN (overlay) network. The services you run externally run on the underlay, i.e. traditional, network. It is possible for containers to communicate with an external network if that external network is public or on the same network as the Rancher host. This is the case we use in clouds, for example AWS.

Application containers run in the Rancher SDN network and we connect those containers to AWS RDS; there is no issue with this.
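For example, outbound connectivity from a container can be verified like this (the hostname is a placeholder):

```sh
# Run a throwaway container on a Rancher host and hit an external service.
# Outbound traffic leaves via the host (source NAT), so this works out of the box.
docker run --rm busybox ping -c 3 db.example.com
```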

However, the reverse is not possible as far as I know, simply because the Rancher SDN uses private addresses for the overlay network. So an external service on the external network can't reach back to a service in the Rancher SDN, simply because it is a private, non-routable address.

The only possible way to do so is to port-map the container to the host, but I believe you don't want to do that.
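For completeness, that port mapping would look something like this (image name and ports are placeholders):

```sh
# Publish container port 8080 on host port 18080; external services then
# talk to <host-ip>:18080 instead of the unroutable 10.42.x.y address.
docker run -d --name my-app -p 18080:8080 my-app-image
```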

Consul and etcd service discovery setups are possible if we use Docker directly, but I doubt whether we could do the same with Rancher.
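With plain Docker, for instance, something like gliderlabs/registrator can watch the Docker socket and push containers' published ports into Consul; a rough sketch (the Consul address is a placeholder):

```sh
# Watch local Docker events and register published ports in an external Consul.
docker run -d --name registrator \
  -v /var/run/docker.sock:/tmp/docker.sock \
  gliderlabs/registrator consul://consul.example.com:8500
```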

Is that the case even if I use the stack name of the container and register that in Consul instead of the IP?
Wouldn't it be possible to somehow have the external services resolve the stack names using some DNS magic?

I’m clueless when it comes to networking :-/

Service discovery uses DNS. Rancher uses RancherDNS for service discovery of containers. Regardless of whether we use an IP or a domain name, we ultimately end up with an IP address, since domain name resolution returns an IP address.
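To illustrate, from inside a container (service and stack names are placeholders, and the returned address is made up):

```sh
# RancherDNS answers for <service>.<stack>.rancher.internal, but the A record
# it returns is still a private overlay address.
dig +short myservice.mystack.rancher.internal
# 10.42.183.27  <- not routable from outside the overlay
```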

In this case, the containers sit on the overlay network with private addresses, while the external service runs on the non-overlay (underlay) network, so it won't be able to communicate with the container service.

The only way I believe will work is to map the container to a host port.

Overlay network to underlay network is possible.
Underlay to overlay network is not possible without port mapping.
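The asymmetry is easy to demonstrate from an external host (both addresses are placeholders):

```sh
# Direct overlay address: no route from outside, so this times out.
curl --max-time 5 http://10.42.183.27:8080/ || echo unreachable

# Published host port: works, because the host address is routable.
curl http://192.168.1.50:18080/
```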

So could I instead use the random port mapping feature together with the Rancher Metadata API, to first assign a random port to the container and then use the Metadata API to extract which port and host IP my container has on the outside?

That would give me pretty much the same behavior without the issue of colliding host ports.
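Something along these lines might work; a sketch from inside the container (the exact metadata paths and output formats vary by Rancher version, so treat these as assumptions):

```sh
# Ask the Rancher metadata service which host IP and which randomly
# published port this container was given, then register that pair in Consul.
curl -s http://rancher-metadata/latest/self/host/agent_ip
curl -s http://rancher-metadata/latest/self/container/ports
```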

I'm not sure what your use case is, but I would use Rancher's HAProxy to front all the containers on one port and then reach them using domain names. This way you can use the same IP address and the same port for all the containers and still access them individually. This is what I have done for my environment.

For example, you can launch containers for app1 and app2 with www.app1.com and www.app2.com as their domain names. You use the same port 80 and the same IP address, yet you can reach the containers app1 and app2 via www.app1.com and www.app2.com without any issues.
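Under the hood this is just host-header routing; a hand-written HAProxy equivalent would look roughly like this (backend addresses are placeholders, and Rancher's load balancer generates the real config for you):

```sh
# Illustrative haproxy.cfg: one IP and port 80, routed by Host header.
cat <<'EOF' > haproxy.cfg
frontend http-in
    bind *:80
    acl is_app1 hdr(host) -i www.app1.com
    acl is_app2 hdr(host) -i www.app2.com
    use_backend app1 if is_app1
    use_backend app2 if is_app2

backend app1
    server app1 10.42.10.11:80
backend app2
    server app2 10.42.10.12:80
EOF
```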

This is what was recommended earlier. Port mapping solves the underlay-to-overlay problem, and if you follow the above procedure you won't need to expose multiple ports and play with ACLs.

Hello guys,

Underlay to overlay network should be possible if you control the underlay network.

For example, if the underlay and overlay networks both use RFC 1918 addresses and sit in separate subnets, we could imagine permitting communication between the two networks without mapping or masquerading, provided Rancher has a "virtual router".

If so, we would just have to configure the underlay network to go through the vR for underlay-to-overlay communication, and maybe for the reverse as well, to escape the masquerade rules used by default.

I have never tried this kind of setup, but in theory it should work.

My only unknown is how to make a container act as the vR :grin:
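In very rough terms the idea would be something like this; completely untested, all addresses are placeholders, and Rancher's default IPsec overlay may defeat it since cross-host overlay traffic is encapsulated:

```sh
# On a Rancher host acting as the "virtual router":
sysctl -w net.ipv4.ip_forward=1
# Skip masquerading for overlay-to-underlay traffic (rule is illustrative;
# it must land before the default MASQUERADE rule).
iptables -t nat -I POSTROUTING -s 10.42.0.0/16 -d 192.168.1.0/24 -j ACCEPT

# On each external host (or on the underlay's default gateway):
# route the overlay subnet via that Rancher host.
ip route add 10.42.0.0/16 via 192.168.1.50
```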

Whoah!

Anything like what’s being proposed here is going to require some serious, hard engineering, for possibly little gain.

Going back to simple basics @rogeralsing, perhaps you can tell us whether the communication is actually initiated externally or, as I suspect, from the containers to Consul (for registration purposes at least). If the latter is the case, you can simply rely on Docker source-NATing the container-to-Consul traffic.
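If registration is the only piece that matters, a container can register itself with the external Consul cluster directly over Consul's HTTP API; names and addresses below are placeholders:

```sh
# From inside the container: outbound traffic is source-NATed, so the external
# Consul agent is reachable. Register the *host* address and published port so
# external consumers get a routable endpoint.
curl -X PUT http://consul.example.com:8500/v1/agent/service/register \
  -d '{"Name": "my-app", "Address": "192.168.1.50", "Port": 18080}'
```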

Of course, there is the question of what IP those containers are going to declare themselves as having, and how you configure routing on your external hosts if the 10.42 network is reachable through multiple physical endpoint hosts. I'd imagine some dynamic routing would do the trick: easy on your external hosts or network equipment, not so much on the Rancher ones. I'm sure you could do something with Quagga or VyOS by querying the metadata service, but it won't be easy.

Is all that worth it to avoid some port mapping? Are containers and Rancher really what you're looking for, or what you need? Perhaps the effort would be better spent moving your external services into containers? Or using a proxy with suitable application-level routing?