Rancher 1.2 issues

Hello,

I recently deployed Rancher 1.2 and noticed a lot of changes from the previous version. Could someone help me with the following queries?

  1. If an application is deployed via Rancher 1.2, its port mapping details do not show up in the docker CLI (docker ps). docker inspect doesn't have that information either, and I have to go to the Rancher UI to find the host port it is mapped to. Is this normal behavior? Is there any way to find this info from the docker CLI?
  2. The Docker private network IP (172.17 range) is also not assigned to containers deployed through Rancher catalogs, even though linking is done using the Rancher private IPs. Is it possible to assign those Docker private IPs too? At the least, can we see the Rancher IPs with docker or other commands from the command line, instead of getting into the container?
  3. Many stacks are deployed automatically when the Rancher server is created, putting at least 7 containers on each host and cluttering the output of docker ps. Are all these containers required for proper networking? How can the behavior of these stacks be changed by redeploying them, for example to use a different subnet? Can we just delete the stack and then bring it back as a private catalog entry to deploy it fresh? How would that affect the current networking?
  4. Containers deployed through the Rancher catalog get names like “r-stackname-servicename-randomcharacters”. I tried to override this with container_name in the docker-compose.yml, but that failed too. Is there any other way to create the container with a different name?

Thanks
Adarsh R

1 and 2 are a result of using CNI for networking: the container is actually run with --net none. Technically the ports reported were never necessarily correct anyway, because we've always allowed you to change the host port on running containers and Docker does not, so the change would not be reflected in docker ps.
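
For the “from the docker CLI” part, something along these lines works on my hosts; the container name is just an example and the io.rancher.* labels are what I happen to see, so verify them on your own containers:

```
# Docker itself sees no network for a Rancher-managed container (CNI wires it up afterwards):
docker inspect -f '{{ .HostConfig.NetworkMode }}' r-mystack-myservice-1   # prints "none"

# The managed 10.42.x.x address shows up as a container label (io.rancher.container.ip
# on my setup) rather than in NetworkSettings -- dump the labels and look for it:
docker inspect -f '{{ range $k, $v := .Config.Labels }}{{ $k }}={{ $v }}{{ printf "\n" }}{{ end }}' \
  r-mystack-myservice-1 | grep io.rancher
```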

  3. Yes, they are needed for the standard functionality. They are split up so that you can choose other implementations of, e.g., networking. See http://docs.rancher.com/rancher/v1.2/en/faqs/troubleshooting/ under “the subnet being used…”.

  4. User-provided container names don't make sense when names have to be unique with more than one instance on a host, across upgrades, etc. Generally, once people are using Rancher they don't use native docker very much. The Rancher CLI has a ps that displays services (rough sketch below).
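
If you do want to keep working from the docker CLI, filtering on the labels Rancher attaches to containers cuts down the docker ps noise, and the Rancher CLI covers the service view. The label name, stack name, and CLI flags below are from my own setup and memory rather than anything guaranteed, so double-check with docker ps --help and rancher ps --help:

```
# Show only containers belonging to one of your own stacks
# ("io.rancher.stack.name" and "mystack" are what I see locally -- verify on your hosts):
docker ps --filter "label=io.rancher.stack.name=mystack"

# Or use the Rancher CLI instead of docker ps:
rancher ps               # services in the current environment
rancher ps --containers  # individual containers, closer to docker ps output
```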

@vincent, as per the Rancher docs, http://docs.rancher.com/rancher/v1.2/en/rancher-services/networking/, they say that the Docker bridge IP (172.17.0.0/16) will also get assigned to containers. See below:

Using Rancher’s IPsec networking, a container will be assigned both a Docker bridge IP (172.17.0.0/16) and a Rancher managed IP (10.42.0.0/16) on the default docker0 bridge. Containers within the same environment are then routable and reachable via the managed network.

Indeed, that’s outdated (@denise).

We’ve updated the docs to describe how networking works in the new post-1.2 world.

http://docs.rancher.com/rancher/v1.4/en/rancher-services/networking/