Network Configuration, Routing, DNS

Hi!

I’ve been playing around with Rancher and I feel pretty comfortable with it as a container platform, except I can’t seem to figure out how to route external requests through FQDNs directly to containers, as opposed to just the host IP.

Here’s my setup, in layers:

  1. FreeNAS on bare metal, sitting on the LAN 192.168.0.0/24
  2. RancherOS VM on top of bhyve hypervisor, bridged networking, has static IP 192.168.0.78 on the LAN.
  3. Rancher host and agent running on RancherOS. Default Docker bridge network docker0 = 172.17.0.0/16
  4. Containers sitting on internal CNI “managed” network 10.42.0.0/16

I have the official Rancher catalog bind9 DNS service deployed as well, responding at 172.17.0.1:53, or internally within the container network at bind9.bind9.rancher.internal:53. I use the Rancher catalog service dnsupdate-rfc2136 to populate bind9 with container names and mappings.

Now, the goal of all this is to deploy multiple copies of the same container image under different names (say, internally, user_stack.container1.rancher.internal, user_stack.container2.rancher.internal, etc…), set port mappings to be randomly assigned, but then address them from the outside using the FQDNs in bind9 and the original ports exposed by the container image. So, essentially, what I’d want it to (sort of) look like is either:

user_stack.container1.my_domain:9000 -> ??? -> 10.42.111.222:9000
user_stack.container2.my_domain:9000 -> ??? -> 10.42.111.223:9000

(Where the question mark represents some type of NAT/routing configuration that forwards the packets through 192.168.0.78 -> the managed network)

OR

user_stack.container1.my_domain:9000 -> 172.17.111.222:9000
user_stack.container2.my_domain:9000 -> 172.17.111.223:9000

In which case I need the containers to get addresses on BOTH the managed AND bridged networks. As far as I can tell, this is no longer possible due to CNI?

Again, assume that I do NOT want to access all my containers through arbitrary ports on the host IP, I’d like to be able to use standard ports and DNS for routing.
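For concreteness, the kind of NAT hop the question marks in the first option stand for would be a DNAT rule on the RancherOS host; here is a sketch in iptables-save format (the container IP is the hypothetical one from the example above). Worth noting: DNAT can only match on destination IP and port, not on the FQDN the client resolved, so a rule like this cannot tell container1’s traffic from container2’s when both names point at 192.168.0.78:

```
*nat
# hypothetical sketch: forward TCP 9000 arriving at the host IP to one container
-A PREROUTING -d 192.168.0.78/32 -p tcp --dport 9000 -j DNAT --to-destination 10.42.111.222:9000
COMMIT
```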

What am I missing?

Thanks,
-Tim

If the services are HTTP, then this is exactly what routing rules on a load balancer do. The balancer is what listens on 9000, the containers publish no ports at all, and the balancer routes each request to the appropriate service based on the Host header.
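As a sketch of what those routing rules can look like in rancher-compose.yml for the catalog load balancer (service paths and hostnames borrowed from the question; the exact keys are from memory and may differ by Rancher version):

```yaml
lb:
  scale: 1
  lb_config:
    port_rules:
      # both rules listen on the same source port; the Host header decides
      - hostname: user_stack.container1.my_domain
        source_port: 9000
        target_port: 9000
        service: user_stack/container1
      - hostname: user_stack.container2.my_domain
        source_port: 9000
        target_port: 9000
        service: user_stack/container2
```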

If they are not (i.e. arbitrary TCP/UDP), then there is no way to differentiate requests to the same IP for service1:9000 vs service2:9000: there is no hostname in an IP packet. So every container needs a unique published IP (on 192.168.0.0/24 if you want them accessible outside the VM), and you’re back to basically publishing + IP scheduling (which is in 1.5+).
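To make the HTTP-vs-raw-TCP distinction concrete, here is a minimal Python sketch of the host-header dispatch a balancer performs (the backend addresses are hypothetical, matching the examples above), and why a bare TCP/UDP stream offers nothing equivalent to dispatch on:

```python
# Several backends share one listening port; each HTTP request is
# dispatched by its Host header. Backend addresses are hypothetical.
BACKENDS = {
    "user_stack.container1.my_domain": ("10.42.111.222", 9000),
    "user_stack.container2.my_domain": ("10.42.111.223", 9000),
}

def route(raw_request: bytes):
    """Pick a backend from the Host header of a raw HTTP/1.1 request."""
    for line in raw_request.decode("ascii").split("\r\n")[1:]:
        if line.lower().startswith("host:"):
            # strip the header name, surrounding whitespace, and any ":port"
            host = line.split(":", 1)[1].strip().split(":")[0]
            return BACKENDS.get(host)
    # a raw TCP/UDP payload has no such header, so there is nothing
    # to dispatch on: hence the need for unique published IPs
    return None

req = b"GET / HTTP/1.1\r\nHost: user_stack.container1.my_domain:9000\r\n\r\n"
print(route(req))  # -> ('10.42.111.222', 9000)
```

A real balancer (haproxy, in Rancher’s case) does the same match before proxying the bytes onward.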

Hm, OK. The actual services are arbitrary TCP/UDP (and even use multiple ports), and I suppose you’re correct: ultimately what I want is unique published IPs on 192.168.0.0/24… or, as I said, the 172.17.0.0/16 network being routable externally (which in my case works fine; I can ping rancherserver at 172.17.0.2 from outside, for example).

Setting up bridged published IPs with a Docker macvlan is a no-go due to CNI, as I understand it, but what exactly do you mean by publishing + IP scheduling? The docs don’t seem to address my use case.

Thanks for the quick response, btw!

OK, I’m so close!

I set up a range of available IP addresses in my cloud config on RancherOS, and then added the same range to my host under Infrastructure/Hosts/Edit Host…

This resulted in Rancher correctly allocating these host IPs to my containers and handling the port mappings properly:

container1 = 192.168.0.95:9000:9000
container2 = 192.168.0.96:9000:9000
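For reference, the compose side of this needs nothing special once the host has an IP pool: publishing the port alone is enough, and the scheduler fills in the IP. A sketch, with a hypothetical image name:

```yaml
version: '2'
services:
  container1:
    image: my_image:latest   # hypothetical image
    ports:
      # no IP specified; with IP scheduling (1.5+) Rancher binds this
      # to the next free address from the host's pool, e.g. 192.168.0.95
      - "9000:9000"
```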

After this, I can access either container through these IPs. I also verified the correct routing by doing the following on the host:

sudo iptables -L -v -t nat

Now there’s only one more bit I need. Right now the external DNS (through dnsupdate-rfc2136) is still being updated with the overall host IP, instead of the scheduled/published IP of the containers. So if I do this from outside somewhere:

host user_stack.container1.my_domain 192.168.0.78

(where .78 is my host & DNS is mapped to port 53)

Then I get:

Using domain server:
Name: 192.168.0.78
Address: 192.168.0.78#53
Aliases: 

user_stack.container1.my_domain has address 192.168.0.78

Instead, I obviously want it to store 192.168.0.95 as per the IP scheduling in Rancher.

What am I missing?

Cheers,
-Tim

I figured it out! After some GitHub digging, I discovered that my use case was fixed in rancher/external-dns:v0.6.4, and I had rancher/external-dns:v0.6.2. After upgrading, it all works!

Cheers!
-Tim