Can't resolve simple container names within sidekick structure

I want to check I’m not doing something crazy here. We have a stack that consists of a load balancer (lb) pointing to an nginx container (web), which has DB (db) and PHP (api) containers as sidekicks.

When run locally using links (without the lb container, or with a haproxy replacement), the web container can resolve api fine as ‘api’ via its hosts file.

When run on Rancher with web as the primary and api & db as sidekicks (without links, as suggested), the web container gets into a fail/restart loop because it can’t resolve ‘api’ (the error is ‘host not found in upstream “api”’).

However, if we use api.web to refer to the api sidekick inside the web container, the api container can be resolved. This messes with a lot of config files that depend on api being ‘api’ whether run locally, on Docker Cloud, or on Rancher.

Is this the intended behaviour (I swear a few months ago I was running setups like this without issues)? If so, how can it be worked around? I considered reading out the api.web IP address and adding it to a line in /etc/hosts on startup, but that feels very messy.
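For what it’s worth, the /etc/hosts idea can be scripted as an entrypoint wrapper. A minimal sketch, assuming the sidekick resolves as api.web (the `pin_host` helper and the demo file path are my own names, not anything Rancher provides):

```shell
#!/bin/sh
# Hypothetical startup wrapper for the web container: look up the sidekick
# under the name Rancher *does* resolve (api.web) and pin that IP to the
# short name "api" before the real process starts.
pin_host() {
    name="$1"; alias="$2"; hosts_file="$3"
    ip="$(getent hosts "$name" | awk '{print $1; exit}')"
    [ -n "$ip" ] && echo "$ip $alias" >> "$hosts_file"
}

# In the real container this would be:
#   pin_host api.web api /etc/hosts
#   exec nginx -g 'daemon off;'
# Demo against a name that resolves everywhere, into a scratch file:
> /tmp/hosts.demo
pin_host localhost api /tmp/hosts.demo
cat /tmp/hosts.demo
```

It stays messy, though: the pinned IP goes stale if the sidekick is rescheduled, which is why a DNS-level fix is nicer.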

I’ve posted a simplified compose file below:

  web:
    image: nginx public
    ports:
      - 80
    links:
      - api
    labels:
      io.rancher.sidekicks: api,db
      io.rancher.scheduler.affinity:host_label: client=apiv1
  api:
    image: api image (based on php public)
    expose:
      - 9000
    labels:
      io.rancher.scheduler.affinity:host_label: client=apiv1
  db:
    image: 'mysql:latest'
    expose:
      - 3306
    labels:
      io.rancher.scheduler.affinity:host_label: client=apiv1

What version of Rancher?

1.1.3 currently, though we also saw the same behaviour with previous versions (definitely as early as 1.1.1).

Here’s the output from the hosts and resolv.conf files and some test pings

I think this issue is related — I’m using Alpine Linux as well but haven’t tested the workarounds yet:

@bferns I just came across this same problem. I ended up sticking a few containers together as sidekicks (because I needed “volumes_from”) and all of a sudden everything broke. It took me a bit to figure out the problem was down to name resolution, and a quick Google search led me to this thread.

The good news is that I tried the workaround you linked and it worked: I basically added the name of the main container as a DNS search parameter to all of the sidekicks, and they can now resolve each other by container name.
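Concretely, the workaround amounts to something like this in the compose file. This is a sketch assuming the service names from the example above; `dns_search` is a standard docker-compose key, and the search domain is the primary service’s name so that a lookup of “api” falls through to “api.web”:

```yml
web:
  dns_search:
    - web   # primary's own name, so short sidekick names resolve
api:
  dns_search:
    - web
db:
  dns_search:
    - web
```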


It helped me to use the notation mentioned in

I have a service called projects running an (alpine) nginx and a sidekick called redmine, so I set the directive
proxy_pass http://redmine.projects:3000;
in my nginx container.
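For anyone hitting the fail/restart loop specifically: another angle is to stop nginx from resolving the upstream at startup at all, by putting the name in a variable and adding a resolver directive, so the lookup happens at request time instead of config load. A sketch, assuming Rancher 1.x’s internal DNS at 169.254.169.250 (verify the address in your own environment):

```
location / {
    # Rancher 1.x network-agent DNS; re-resolve every 10s (assumption: check your setup)
    resolver 169.254.169.250 valid=10s;
    # Using a variable forces nginx to resolve at request time via "resolver",
    # so a not-yet-resolvable sidekick no longer kills nginx on boot.
    set $upstream http://redmine.projects:3000;
    proxy_pass $upstream;
}
```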