Firewall rules for managed network

Hello,

Could you specify the list of TCP/UDP ports to open between hosts (RancherOS hosts) on different networks to allow the Rancher managed network to work?

Bastien

UDP 500 and 4500 between all the hosts (docs)


Thank you, I had missed this part of the doc.

Hi Vincent,

Despite setting up the rules on our firewalls (as described in the doc), the VPN traffic doesn’t work.

This is related to Rancher’s managed network mode; in the doc we find:

Under Rancher’s network, a container will be assigned both a Docker bridge IP (172.17.0.0/16) and a Rancher managed IP (10.42.0.0/16) on the default docker0 bridge. Containers within the same environment are then routable and reachable via the managed network. 

But in an enterprise environment, the network is traditionally split into zones (DMZ, PROD, etc.).

In our case, we have:

  • one rancher host in DMZ zone with an IP like 10.13.0.1/16
  • one rancher host in APP zone with an IP like 10.14.0.1/16

Both hosts use a default gateway that acts as a firewall, and we have set up the rules described in the Rancher doc (500/UDP, 4500/UDP) on this firewall.

The problem is that the IPSec negotiation is made with the “Network Agent” container IP (10.42.X.X).
This network cannot be routed between security zones, because it is the same subnet in each zone, and therefore the traffic cannot be filtered.

One possible fix could be to use the host IP to negotiate the IPSec VPN between “Network Agents” (via NAT or network privileges).

The point-to-point IPSec connections between network agents are made from the source network agent to the destination using the host IP (as shown on the host screen of the UI) of the host the destination agent is on. They can’t use 10.42 addresses to talk to each other because they’re the ones setting up cross-host communication for the 10.42 IPs in the first place.

Almost every time people have issues like this there’s a firewall rule that was missed or similar, or the hosts do not have mutually reachable IP addresses configured. I would stop docker/the network agent and test connectivity with something like netcat (nc).
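
If netcat isn’t handy on the hosts, a rough stand-in in Python is sketched below; it is not from the Rancher docs, and the script name, usage and port choice are only illustrative. With Docker and the network agent stopped, run the listener on the destination host and the sender on the source host, once for udp/500 and once for udp/4500 (binding a port below 1024 needs root):

# udp_check.py -- minimal UDP reachability test, a stand-in for "nc -u".
# Run as root, with Docker and the network agent stopped on both hosts.
#   destination host:  python udp_check.py listen
#   source host:       python udp_check.py <destination_host_ip>
# Change PORT to 4500 and repeat the test.
import socket
import sys

PORT = 500

if sys.argv[1] == "listen":
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("0.0.0.0", PORT))
    print("listening on udp/%d ..." % PORT)
    data, addr = sock.recvfrom(1024)
    print("got %r from %s:%d" % (data, addr[0], addr[1]))
else:
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.sendto(b"ping", (sys.argv[1], PORT))
    print("sent a datagram to %s:%d - check the listener's output" % (sys.argv[1], PORT))

If the datagram never shows up on the listener even with the firewall rules in place, the traffic is being dropped somewhere between the hosts (or on the hosts themselves).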

Thank you Vincent,

But as seen in the firewall logs, it seems the source network agent uses the host IP as the source, but tries to talk to the network agent IP as the destination.

2015-09-25T17:02:07+02:00 10.255.255.253 RT_FLOW: RT_FLOW_SESSION_DENY: session denied 10.13.1.16/1024->10.42.47.247/500 junos-ike 17(0) default DMZ Internet UNKNOWN UNKNOWN N/A(N/A) reth1.900 UNKNOWN policy deny
2015-09-25T17:02:17+02:00 10.255.255.253 RT_FLOW: RT_FLOW_SESSION_DENY: session denied 10.13.1.16/1024->10.42.47.247/500 junos-ike 17(0) default DMZ Internet UNKNOWN UNKNOWN N/A(N/A) reth1.900 UNKNOWN policy deny
2015-09-25T17:02:27+02:00 10.255.255.253 RT_FLOW: RT_FLOW_SESSION_DENY: session denied 10.13.1.16/1024->10.42.47.247/500 junos-ike 17(0) default DMZ Internet UNKNOWN UNKNOWN N/A(N/A) reth1.900 UNKNOWN policy deny
2015-09-25T17:02:37+02:00 10.255.255.253 RT_FLOW: RT_FLOW_SESSION_DENY: session denied 10.13.1.16/1024->10.42.47.247/500 junos-ike 17(0) default DMZ Internet UNKNOWN UNKNOWN N/A(N/A) reth1.900 UNKNOWN policy deny

Hello Vincent,

For info, I just tested again after a fresh install of Rancher 0.40 (server and clients on all nodes).
I get the same issue.

The link above is broken - here is the link to the latest docs

Are UDP 500 and 4500 only needed between agent hosts?

Do they need to be opened between the master and the agents?

@Shuliyey ports 500/udp and 4500/udp need to be opened on the cluster hosts, not the server (a.k.a. master) node. The only port that needs to be opened on the server node is 8080/tcp if you are using the default, or any_other_port_you_chose/tcp. If you are using certificates, this changes to 443/tcp.
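
On the TCP side, a quick way to confirm an agent host can reach the server is a plain socket connect; this is only a sketch, and rancher.example.com and the port tuple are placeholders for your own server address and the single port your setup actually uses:

# tcp_check.py -- run from an agent host to check that the Rancher server port answers.
# SERVER and PORTS are placeholders; keep only the port your setup really uses.
import socket

SERVER = "rancher.example.com"
PORTS = (8080, 443)

for port in PORTS:
    try:
        socket.create_connection((SERVER, port), timeout=3).close()
        print("tcp/%d reachable" % port)
    except socket.error as exc:
        print("tcp/%d NOT reachable: %s" % (port, exc))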