Destination Host Unreachable - AWS VPC NAT routing

Hi,

I am using Rancher and RancherOS in an AWS VPC with a private address range, with the AWS NAT service routing traffic to the internet.

Even though I have links set up, I get Destination Host Unreachable when containers on different machines try to access one another. This all works if the machines have an internet gateway and a public IP.

How should I set this up so the links work, or is the issue still outstanding?

Looking at this post, Rancher appears to be assigning the NAT service IP rather than the machine IP.

The NAT service IP is 52.209.232.125.

Below are sample compose files.

Thank you,

Peter

docker-compose.yml:
redis:
  image: redis

pinger:
  image: ubuntu:14.04
  command: ping redis01
  links:
  - redis:redis01

rancher-compose.yml:

pinger:
  scale: 7

Is Rancher inside the same VPC? Pay attention to the IP address behind the hostname you use for Rancher. The network that IP sits on will determine the IP that is used to communicate with Rancher, and also with other nodes. If you want to use your 10.0.* network, then use a 10.0.* IP to connect to Rancher.

I can see that your hosts are described as 52.209.*, which means they have come up with “public” IP addresses. Traffic between hosts will therefore be routed over those public addresses, which I’d bet is exactly what you don’t want.

Yes, the Rancher manager is in the same VPC. It is also in the 10.0.* range, in a different subnet that uses the same NAT service.

Yes, both hosts show the IP 52.209.232.125 in the UI, but that is the NAT service IP. Their actual IP addresses are 10.0.100.196 and 10.0.101.170.

The other article from last year suggests that Rancher is picking up the NAT IP and trying to bind to that address for the IPsec network. Obviously that doesn’t work. How can I change the configuration so it does not pick up the NAT service IP address?

Thank you

When you first register the host with Rancher, use a 10.0.* IP to reference the Rancher host. That will cause Rancher to use a 10.0.* IP for your hosts, which will make networking work. I.e. don’t use a public IP/hostname for Rancher itself when registering hosts.
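For reference, a rough sketch of what that registration could look like if you run the agent by hand (with <internal-rancher-ip> and <registrationToken> as placeholders, not your real values):

# Register against the Rancher server's internal 10.0.* address so the
# agent detects a private IP for this host rather than a public one.
sudo docker run -d --privileged \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -v /var/lib/rancher:/var/lib/rancher \
  rancher/agent \
  https://<internal-rancher-ip>/v1/scripts/<registrationToken>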

Hi,

Thank you. I am registering the host using cloud-config in the user data; see below. I think you are saying I need to use an internal IP address rather than the one I have set up in Route 53. Is that correct?

Thank you

#cloud-config
rancher:
  services:
    register:
      privileged: true
      volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      image: rancher/agent
      command: https://<rancher-server-ip>/v1/scripts/<registrationToken>

Hi,

I have changed the HAProxy machine to allow internal access, but the result still gives the external address as the CATTLE_URL. Is this the correct approach?

#!/bin/sh

export CATTLE_REGISTRATION_ACCESS_KEY="registrationToken"
export CATTLE_REGISTRATION_SECRET_KEY="<Key>"
export CATTLE_URL="https://<external-dns-address>/v1"
export DETECTED_CATTLE_AGENT_IP="10.0.102.36"

You need to find a way to provide the internal IP for Rancher in that CATTLE_URL parameter.
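In other words, the exports that come back from that endpoint would need to look more like this, with an address that resolves to the 10.0.* IP (shown here as the placeholder <internal-dns-address>) in CATTLE_URL:

export CATTLE_REGISTRATION_ACCESS_KEY="registrationToken"
export CATTLE_REGISTRATION_SECRET_KEY="<Key>"
# CATTLE_URL is the part that has to point at the internal address.
export CATTLE_URL="https://<internal-dns-address>/v1"
export DETECTED_CATTLE_AGENT_IP="10.0.102.36"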

The shell script above is the result returned from https:///v1/scripts/

So I will create a new internal Route 53 record for the internal address and parse the results of the request above. I don’t know Rancher’s cloud-config well enough, so I will go back to Ubuntu and script it there, roughly along the lines of the sketch below.
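Something like this is what I have in mind, with rancher.internal.example standing in for the new internal Route 53 record (and assuming the agent honours a CATTLE_AGENT_IP override, which I still need to confirm):

#!/bin/sh
# Sketch of the registration script for the Ubuntu hosts.
# rancher.internal.example is a placeholder for the internal Route 53 record
# that resolves to the Rancher server's 10.0.* address.

# This host's private address, taken from the EC2 instance metadata.
PRIVATE_IP=$(curl -s http://169.254.169.254/latest/meta-data/local-ipv4)

# Register against the internal record and report the private IP explicitly,
# so neither side ends up using the NAT service address.
sudo docker run -d --privileged \
  -e CATTLE_AGENT_IP="$PRIVATE_IP" \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -v /var/lib/rancher:/var/lib/rancher \
  rancher/agent \
  "https://rancher.internal.example/v1/scripts/<registrationToken>"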

Thank you