Cross-host container linking doesn't work (AWS)

Hello

I’m trying to set up Rancher on AWS.
My current setup is the following:

  • 1 host with rancher/server:0.51.0 provisioned with docker-machine
  • 3 hosts with rancher/agent provisioned via the Rancher UI.

The hosts live in a VPC, in the same subnet, and can communicate (the security group allows all traffic inside the SG).
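For what it’s worth, my understanding is that the IPsec overlay itself only strictly needs UDP 500 and 4500 open between the hosts, so the minimal rules would be something like this (sg-xxxxxxxx stands in for my security group’s ID):

```
# Minimal ingress for Rancher's IPsec overlay between hosts in the same SG
# (sg-xxxxxxxx is a placeholder for the actual security group ID)
aws ec2 authorize-security-group-ingress --group-id sg-xxxxxxxx \
  --protocol udp --port 500 --source-group sg-xxxxxxxx
aws ec2 authorize-security-group-ingress --group-id sg-xxxxxxxx \
  --protocol udp --port 4500 --source-group sg-xxxxxxxx
```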

When everything is up, the cross-host networking doesn’t seem to work. When two containers in a network are on the same host they can communicate, whereas when they are on different hosts they can’t.

Here is the infrastructure page:

[screenshot of the Infrastructure page]

I don’t know where the 172.31.25.41 IP address comes from: it’s not one of my hosts’ private or public IPs… Sometimes two hosts show the same IP!
I tried to re-register the agent, forcing CATTLE_IP to the host’s private VPC IP, but it didn’t work either.

Where can I find some logs for the IPsec networking to figure out what’s going wrong? Any advice?

You should be able to re-register the agent and force -e CATTLE_AGENT_IP to the private VPC IP.
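For reference, that just means re-running the registration command from the Add Host screen with the variable added, roughly like this (the agent version, server URL, and token below are placeholders; use the ones your UI gives you):

```
# Registration command from the Add Host screen, with the host's private
# VPC IP forced via CATTLE_AGENT_IP (version, URL, and token are placeholders)
sudo docker run -d --privileged \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -e CATTLE_AGENT_IP=<host-private-vpc-ip> \
  rancher/agent:<version> \
  http://<rancher-server>:8080/v1/scripts/<registration-token>
```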

What happens when you try to do that?

Is the 172.31.25.41 IP the docker0 bridge IP?
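You can check on the host itself. And for the IPsec logs you asked about, they live in the Network Agent container that Rancher starts on each host (the grep below assumes it runs from the agent-instance image, which may differ by version):

```
# Show the docker0 bridge address on this host
ip addr show docker0

# Find the Rancher network agent container (image name is an assumption
# and may differ by version)
docker ps | grep agent-instance

# Tail its logs for IPsec errors, using the container ID found above
docker logs --tail 100 <container-id>
```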


By forcing it to the private VPC IP it works, thx. I tried with the public address, tried to separate the Rancher server from the VPC, and a lot of other things… but not this one.

Thx for the answer, and kudos on your product, it’s awesome!