Rancher Networking Architecture vs. libnetwork (Docker 1.9)

Hello RancherLabs,

First of all, sorry for my poor English.

I read that Docker will release its own “multi-host networking” solution in release 1.9, implemented with libnetwork.

Currently Rancher provides the same “feature” via IPsec VPN tunneling.
Will you change your implementation from the “home-brew” approach to the “Docker standard”?

What is the difference between the “libnetwork solution” and “your solution”?


I looked at Weave, Calico, and Open vSwitch as Docker plugins.

Weave is an easy-to-deploy virtual switch running as a Docker container, but with some limitations (the plugin loses its configuration after a restart, even though the networks stay registered with Docker, and only one subnet can be added). I started testing Weave inside a privileged container, and that changed the permissions for (user-)Docker on RancherOS: no docker command would run without “sudo”. I rebooted to get the normal behavior (user-docker as the rancher user) back.
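For reference, the rough sequence I used looked like this (a sketch from memory; the Weave CLI and its proxy setup have changed between versions, so exact flags may differ):

```shell
# Install the weave wrapper script on the host
sudo curl -L git.io/weave -o /usr/local/bin/weave
sudo chmod +x /usr/local/bin/weave

# Launch the weave router/proxy containers
# (this is the step I ran from inside a privileged container on RancherOS)
weave launch

# Point the docker CLI at weave's proxy so new containers join the weave network
eval "$(weave env)"

# Containers started now get an address from weave's single default subnet
docker run -d --name test1 alpine sleep 3600
```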

Open vSwitch is a powerful but complex solution that I haven’t used so far.

Calico is a vRouter solution. In my test setup, creating a profile (network) and attaching containers to it looked good, but I couldn’t ping containers inside the same profile/network. The ICMP packets reached the host but weren’t routed to the destination container. Containers get a /32 IP address, and you can’t use the same IP twice (because it would be a routing conflict, so Calico denies duplicates). This was a really quick test on CentOS 7 with Docker 1.9 (dev) yesterday.
I like Calico, but you can’t build independent container networks for different customers with the same subnet.
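The test setup was roughly the following (approximate calicoctl commands from the pre-1.0 calico-docker era, written from memory; the container/profile names and IPs are just examples, and the calicoctl syntax has changed a lot since then):

```shell
# Start the calico/node agent on each host (privileged, talks to etcd)
sudo calicoctl node

# Create a profile, which acts as a shared network/policy group
calicoctl profile add customer-a

# Attach running containers to Calico, each with a unique /32 address;
# reusing an IP is rejected because it would create a routing conflict
calicoctl container add web1 192.168.0.10
calicoctl container add web2 192.168.0.11

# Put both containers in the same profile so they are allowed to talk
calicoctl container web1 profile append customer-a
calicoctl container web2 profile append customer-a
```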

Is there documentation on how Rancher networking works and how it could be used with custom network needs (manually assigned IPs, different service/stack subnets, …)?
I found this discussion about custom network configuration here in the forum.

Hi pwFoo,

thank you for the information.

We also tried it with Weave; it was very easy to implement and it worked.

But if I use Rancher, I’m using it because I do not want to handle the network stuff on my own.
RancherLabs has already implemented its own multi-host networking solution. My question was:
Is RancherLabs considering, or maybe already planning, to use the libnetwork implementation and throw the home-brew solution away, or is the home-brew solution still better, so they will keep it?



The Docker team recently released Docker 1.9 with networking support and persistent storage. So I’m also interested in how Rancher plans to support the Docker way (or not) for the networking part.

Indeed, I’ve been testing Rancher for several months now and I’m quite happy with it. But for the long term, I prefer to stay as close as possible to Docker for the low-level parts of the containers (like networking).

Rancher team, do you have any visibility on supporting (or not) libnetwork to manage the Rancher overlay network?



Docker 1.9 networking is a culmination of a lot of work and discussion that we have been involved with for the last year. We will be refactoring Rancher networking to be an actual 1.9 network driver. This means out of the box Rancher will still use our own networking, but it will be done as a proper network driver. We are investigating supporting other network drivers, but this will most likely be on a case by case basis.
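In concrete terms, a Docker 1.9 network driver plugs into the standard CLI workflow. A sketch (“rancher” is a hypothetical driver name here, standing in for what a refactored Rancher driver might register as; the overlay example uses Docker’s real built-in driver, which requires a key-value store such as Consul configured on the daemon):

```shell
# A third-party driver registers by name and is selected with -d
docker network create -d rancher mynet

# Built-in drivers work the same way, e.g. the multi-host overlay driver
docker network create -d overlay othernet

# Containers attach to a named network at run time
docker run -d --net=mynet --name web nginx

# List networks and their drivers
docker network ls
```

The point being that “out of the box Rancher networking” and any other driver would then be selected through the same `docker network` interface, rather than through Rancher-specific plumbing.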

As we refactor to take advantage of 1.9 networking, you will not see a huge change from the user perspective. I’m guessing we will add the ability to create multiple networks, but honestly this has not been a highly requested feature.

We are committed to being as native Docker as possible.


Darren, thank you for this. One point I always bring up when discussing why we chose Rancher over the alternatives is how transparent you are with Docker, rather than having Docker be an implementation detail buried deep down, out of reach of the users.

So I’m very pleased to hear that this is a trait you are committed to upholding, and I simply want to let you know how appreciated it is that you keep it that way :smile:

Thank you for your answer.

Last question, close to the topic (I guess): I’m trying to set up inter-networking between a physical network and the Rancher-managed network.

Is it possible? I don’t need filtering or complex networking, just to be able to reach a container in the managed network from the outside using only its private IP.

@renaudManda I’m pretty sure that if you can route to any Rancher-managed host, we will route the traffic across the private network to the host where the container is. I haven’t really tested that much, but in theory you should be able to route traffic between the two. Most users use VPN containers to do this, but I think raw routing will work.
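In plain routing terms, that would look something like the following sketch on a machine (or router) in the external private network (the host IP and container IP are made-up examples; 10.42.0.0/16 is Rancher’s default managed subnet, and whether the host actually forwards the traffic into the IPsec overlay is exactly the untested part):

```shell
# Add a static route for Rancher's managed subnet via the reachable
# IP of any Rancher host (192.168.1.50 is an example address)
sudo ip route add 10.42.0.0/16 via 192.168.1.50

# If the host forwards into the overlay, a container's private IP
# should then be reachable directly (example container IP)
ping 10.42.137.66
```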

@ibuildthecloud thanks for your feedback, but on my side it’s not working.

Now with Rancher v1.0 I’m really close to moving forward to a full Rancher environment to manage my small business. But because I come from the netops world, I always need to be able to do everything I want with my networks.

So right now I have a private network used by VMs/servers outside the Rancher world, and I want the apps on the Rancher-managed network to be able to communicate in both directions with my outside private network, without NAT (so in pure routed mode).

Do you think it’s possible today?

I saw an OpenVPN app in the Rancher catalog which claims to give access to the Rancher-managed network from the outside using OpenVPN. Is that the right way to do it? (There will be some overhead, but whatever, the traffic is really low.)