Cannot ping containers on host in other DC

Hi guys,

I’ve searched and found very little info, other than a closed issue that is quite old… Anyway, I have a VM on DigitalOcean running rancher-server (rs), plus two other hosts on DigitalOcean (do1 & do2).

I have a third host on my local machine, which I provisioned with docker-machine using the VirtualBox driver and added to the Rancher server (local1).

I see all hosts fine and can start containers on all of them. However, I am unable to ping the private IPs across datacenters…

I’m able to ping any container running on do1 from do2 using the container’s private IP (10.42.…). However, I’m unable to ping any container on local1 (again using the internal IP), and I’m unable to ping any container on do1 or do2 from local1 (again using the 10.42.x.x address).

I checked the network agents and they all seem to have the same logs; nothing is erroring out or standing out on the local1 node in particular… How can I go about debugging this?

What I am trying to achieve is:
Have a Rancher node within our private datacenter, as part of a larger environment with machines across multiple providers. An external service would be set to run only on the node in our datacenter (by specifying a label and an affinity rule, roughly as sketched below), so that any container can reach a service running in our private datacenter, no matter where the connecting container resides… (I am a bit far from this; I'm still trying to establish IP connectivity… hostnames, links, etc. would come second.)
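For the scheduling piece, what I have in mind is roughly the following (a sketch, assuming Rancher 1.x host labels and affinity labels; the `dc=private` label and the nginx image are just illustrative):

```sh
# Sketch (Rancher 1.x): label the in-datacenter host at registration time,
# then pin a service to hosts carrying that label via an affinity label.
# "dc=private" is an illustrative label, not something Rancher predefines.

# 1) When registering the host, add a host label to the rancher/agent run:
#      -e CATTLE_HOST_LABELS='dc=private'

# 2) In docker-compose.yml, schedule the service onto labeled hosts only:
cat > docker-compose.yml <<'EOF'
external-proxy:
  image: nginx
  labels:
    io.rancher.scheduler.affinity:host_label: dc=private
EOF
```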

thanks

Pay attention to the IP address that you use to refer to Rancher, particularly when starting the rancher-agent. If, for example, you use a public IP address, the host registers with its public IP and Rancher will route traffic over it (that is, the 10.42.* traffic will travel over a VPN between the public IPs). If you want it to use a private IP, connect to the Rancher server via its private IP, or override the registered IP explicitly when starting the agent.
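A sketch of the explicit override (CATTLE_AGENT_IP is Rancher 1.x's documented knob for this; the agent version, server URL, and token below are placeholders you'd take from your own registration command in the UI):

```sh
# Sketch: register a host with an explicit IP instead of the auto-detected one.
# CATTLE_AGENT_IP is the Rancher 1.x override; the URL, token, and agent
# version are placeholders from your environment's "Add Host" screen.
sudo docker run -d --privileged \
  -e CATTLE_AGENT_IP=192.168.55.11 \
  -v /var/run/docker.sock:/var/run/docker.sock \
  rancher/agent:v1.0.2 \
  http://<rancher-server>:8080/v1/scripts/<registration-token>
```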

Not sure that this is a full answer, but hopefully it gives you something to work with.

Hi,

I apologize if I’m misunderstanding the concepts… but here is a clearer picture of what I am trying to do and what I’m getting…

My idea is to have Rancher hosts in different zones/datacenters and use Rancher’s networking features to make the access “transparent”. In the simplest possible terms, the architecture is: DC-1 contains Test Server 1 (a Rancher host) and a DB server that lives outside Rancher (192.168.55.10, private network only); DC-2 contains Test Server 2 (a Rancher host with a public IP); phpmyadmin containers on either host should be able to reach the DB server.

Therefore, I assume I need to add an external service in Rancher so that any phpmyadmin container can access the DB server. I am able to deploy the first test server (in DC-1) and use the internal 192.168.55.10 IP to reach the DB server; since they are in the same datacenter, the host can reach the DB server and so the container can too. What I am trying to do is leverage the VPN between Rancher hosts to reach a server outside the Rancher VPN. Basically, I want any phpmyadmin container to be able to access the DB-SERVER…

I tried adding an external service and linking it… but I can’t even get it to work from Server 1, much less Server 2… I tried adding a Load Balancer linked to the external service, and still no go… and also a regular service alias… I think I’m missing something? Also, shouldn’t I define somewhere that the external service needs to be on a particular host or set of hosts?

Below is a bit of everything I tried… lol… I have temporarily disabled Server 2, so everything should be “local” within the same network…

- dbsvc = external service pointing to 192.168.55.10
- mydb = service alias to dbsvc (tried using that before adding a load balancer with an affinity rule to that datacenter)
- lbdb = a load balancer linked to dbsvc, exposing 3306/tcp -> 3306
- lb-loc = a load balancer exposing phpmyadmin, 80/http -> phpmylocal:80

I can access phpmyadmin through lb-loc; it brings up phpmyadmin. I can log in to the DB server if I use the “DMZ IP”; this works (but only from servers in that DC). It won’t work if I try dbsvc, lbdb, or mydb (while mydb was running).

Also, if I go into the console of the phpmyadmin container, I can ping the DB server (192.168.55.10), but I don’t get any response when I ping mydb, dbsvc, or lbdb… These are all in one stack, and as I said I have two servers but have currently deactivated the “remote” one, to try and get it working locally first.
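For reference, here is roughly how I’ve been checking name resolution from inside the phpmyadmin container (a sketch; it assumes Rancher 1.x managed networking and its internal DNS names, and “mystack” is a placeholder for my stack name):

```sh
# Run inside the phpmyadmin container (Rancher UI console or docker exec).
# Assumes Rancher 1.x managed networking; "mystack" is a placeholder.
cat /etc/resolv.conf                      # should point at Rancher DNS (169.254.169.250)

nslookup dbsvc                            # short name, same stack
nslookup dbsvc.mystack.rancher.internal   # fully qualified Rancher 1.x name

# If the name resolves, test the port instead of ping: a target can be
# reachable on 3306 while not answering ICMP.
nc -zv -w 3 dbsvc 3306
```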

Any ideas? Is this what external services are for, or am I missing the point? I assume I could run haproxy on a machine, but I was wondering if I can achieve the same with Rancher’s native features.

Cross-host communication happens directly between the source and target hosts, using the IP address each host is registered with (the one shown on the host in the UI). So those IPs need to be mutually reachable. The network agent on Test Server 2 (159.200.x.y) can’t open a connection to 192.168.55.11 to reach Test Server 1.
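A quick way to sanity-check reachability between two hosts (a sketch; it assumes Rancher 1.x, whose overlay runs IPsec over UDP ports 500 and 4500 between the registered host IPs, and it reuses the placeholder IP from this thread):

```sh
# From Test Server 1, probe the *registered* IP of Test Server 2
# (159.200.x.y is the placeholder from this thread; use what the UI shows).
ping -c 3 159.200.x.y

# Rancher 1.x's overlay is IPsec over UDP 500/4500; probe them with netcat.
# Note: UDP probes are best-effort. "succeeded" only means no ICMP
# port-unreachable came back, but a timeout or "refused" is a real clue.
nc -uzv -w 3 159.200.x.y 500
nc -uzv -w 3 159.200.x.y 4500
```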

Understood, that makes sense… My question is about what external services do, and why I can’t get Test Server 1 to reach the 192.168.55.10 server using an external service (assuming that’s what it’s for?).

What is the suggested setup to reach a server outside Rancher? Would it be to use a proxy container (nginx, haproxy, or similar) on a Rancher host in the same datacenter/network as the external server?

“Service Alias” and “External Service” are essentially just DNS records. e.g. if you use Amazon RDS, you can create a service called “db” that points to the IP/name they gave you, so that your application can refer to “db” instead of coding “ip-1.2.3.4-rds.us-west.amazonaws.com” into your application. They don’t magically make something available externally.
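In compose terms, an external service is just this (a sketch; it assumes Rancher 1.x compose conventions, where rancher/external-service is a placeholder image and external_ips lives in rancher-compose.yml; “db” and “mystack” are example names):

```sh
# Sketch (Rancher 1.x): an external service "db" that resolves to an
# outside IP. rancher/external-service is Rancher's placeholder image for
# compose-defined external services; the names here are illustrative.
cat > docker-compose.yml <<'EOF'
db:
  image: rancher/external-service
EOF

cat > rancher-compose.yml <<'EOF'
db:
  external_ips:
    - 192.168.55.10
EOF

# Create/update the stack; containers can then resolve "db" via Rancher DNS.
rancher-compose -p mystack up -d
```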

If the 192.168.55.10 server has no public IP, then you need something like a Load Balancer (or roll your own proxy container) scheduled to run on a host that does have a public IP, targeting the external service that points at the private-only server.
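As a sketch (again Rancher 1.x compose conventions; the dc=dc1 host label is made up, and this assumes the “db” external service from the previous example):

```sh
# Sketch (Rancher 1.x): a load balancer pinned via host-label affinity to a
# DC-1 host that every other host can reach, forwarding TCP 3306 to the
# "db" external service defined earlier. The dc=dc1 label is illustrative.
cat > docker-compose.yml <<'EOF'
lbdb:
  image: rancher/load-balancer-service
  ports:
    - "3306:3306/tcp"
  links:
    - db:db
  labels:
    io.rancher.scheduler.affinity:host_label: dc=dc1
EOF

rancher-compose -p mystack up -d
```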

Understood, now it makes a bit more sense for me! thanks! :slight_smile:

I understand how it can be used in a single-DC / public-IP scenario; I just thought (wrongly) for a moment that it would leverage more than DNS (i.e., a Rancher LB) in order to give the service an “internal Rancher” (10.42.x.x) IP.

I will put something together on top of an haproxy container to do this, no biggie…

Nonetheless, I’m still unable to get it working inside the same datacenter… basically the container can reach the IP directly, but won’t resolve the external service name to that same IP… (which, as I understand it, is exactly what it should do…)