Is there a way to disable NAT/MASQUERADE between the overlay network and a local network?
I’d like to be able to route some services directly from my local network to the overlay network without port forwarding.
Right now it fails due to the MASQUERADE/NAT.
So I’d like to customize the netfilter rules in the nat table’s POSTROUTING chain to exclude my local network from being masqueraded.
I know how to do it manually (deleting and recreating the rule by hand), but do you know if it’s possible to do it directly through configuration?
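For reference, the manual exclusion described could look like this (a sketch, not a Rancher option; the 10.42.0.0/16 and 10.99.0.0/16 subnets are examples, and the commands need root on the host):

```shell
# Insert an exception ahead of the MASQUERADE rule in the nat table's
# POSTROUTING chain: traffic from the overlay network to the local network
# leaves the chain before masquerading and keeps its real source IP.
iptables -t nat -I POSTROUTING 1 -s 10.42.0.0/16 -d 10.99.0.0/16 -j RETURN

# Check that the exception sits above the MASQUERADE rule:
iptables -t nat -L POSTROUTING -n -v --line-numbers
```

The drawback, as noted, is that anything that recreates the chain (e.g. a restart of the networking infrastructure) wipes the hand-inserted rule.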
What version of Rancher are you using? The new v1.2.x has the new infrastructure services, and the networking is different from the previous releases.
Could you give more information about what you are trying to do? It’s not very clear what you mean by ‘local’ network. A local Docker network? Another bridge? Or another network interface on the host?
I have two networks:
- 10.42.0.0/16: rancher overlay network
- 10.99.0.0/16: private network in a VLAN shared between Rancher and non-Rancher hosts
My use case:
I’d like to communicate between both networks without doing NAT.
So I’ve configured Rancher to use an external DNS service (PowerDNS), and I added static routes on the core network routers to route 10.42.0.0/16 to the closest Rancher host (unicast).
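On the non-Rancher side, such a static route could be sketched as follows (192.0.2.10 is a placeholder for the closest Rancher host’s address on the shared network):

```shell
# Route the overlay subnet via the nearest Rancher host (placeholder next hop).
ip route add 10.42.0.0/16 via 192.0.2.10

# Confirm the kernel picked it up:
ip route show 10.42.0.0/16
```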
For testing I’m just using ping (ICMP):
- it works if I ping a container IP from a Rancher host, even if the container is on another Rancher host.
- it does not work if I ping from a non-Rancher host.
If I sniff the network and look at the netfilter nat table, I can see that all IP packets from the overlay network to other networks are being MASQUERADEd. And of course, because my ping doesn’t use port forwarding, I get:
- an ICMP echo request from 10.99.0.0/16 to 10.42.0.0/16 routed without NAT (I see the request reach the container);
- an ICMP echo reply from 10.42.0.0/16 to 10.99.0.0/16 MASQUERADEd by the host where the container runs.
So of course the test fails, because the echo reply comes back with the wrong source IP (the Rancher host’s instead of the container’s).
Masquerading on the way back is what is normally expected, but in my use case I don’t want it…
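A quick way to confirm this from the Rancher host (generic netfilter/tcpdump commands, not Rancher-specific; they require root):

```shell
# List the nat POSTROUTING rules with packet counters; the MASQUERADE
# counter increments as the echo replies leave the host.
iptables -t nat -L POSTROUTING -n -v

# Watch the ICMP exchange on the wire while pinging from the 10.99.0.0/16 side;
# the echo reply carries the host's source address instead of the container's.
tcpdump -ni any icmp
```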
I’m not sure it’s a Rancher issue, but maybe I’ve missed something, or I could proceed differently?
The behavior you are describing is by design, it’s not a bug/problem with Rancher.
This is not going to work going forward, as this is a bug: https://github.com/rancher/rancher/issues/4324
For your needs, can’t you use ‘HOST’ networking just for the services that need it?
And another question: Why is it that you don’t want to use the MASQUERADE behavior?
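For completeness, host networking in plain Docker terms looks like this (a sketch; `my-ldap-image` is a placeholder image name, and in Rancher the network mode is selected per service in the UI or compose file):

```shell
# The container shares the host's network namespace, so the service binds
# directly to the host's interfaces and bypasses the overlay and its MASQUERADE.
docker run -d --net=host --name ldap my-ldap-image
```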
Thanks for your quick reply.
> For your needs, can’t you use ‘HOST’ networking just for the needed services?
Yes I can; it’s probably because I feel more netops than devops.
But if I’m right, if I want to use HOST networking I have to deploy an LB on all hosts in case my container moves from one host to another, don’t I?
For example, if I have an LDAP container on host A and the container then moves to host B, either I detect the move and contact host B, or I run an LB on all hosts to be able to forward requests to the container, right?
Using a static IP on the container simplifies the process, from my perspective, if I can route IP packets between both networks.
> And another question: why is it that you don’t want to use the MASQUERADE behavior?
The first reason was explained previously.
The other reason is more network-related than ops-related: MASQUERADE is not stateless (it relies on connection tracking), and from my point of view I prefer stateless behaviour.
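The statefulness in question is netfilter’s connection tracking, which MASQUERADE depends on; it can be observed directly (generic commands, requiring root and the `conntrack` tool):

```shell
# Each masqueraded flow consumes an entry in the conntrack table.
conntrack -L | head

# Current number of tracked connections versus the table's limit:
cat /proc/sys/net/netfilter/nf_conntrack_count
cat /proc/sys/net/netfilter/nf_conntrack_max
```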
I have to admit I’m not an expert in container/Docker tech. I come from the sysadmin world and probably have bad reflexes when it comes to microservices architecture, but from the beginning with Docker (and Rancher) I haven’t felt safe going to production without a strong network stack.
One last thing: if a connection is initiated from the overlay network to my private network with MASQUERADE, I can’t know who really initiated it…