Can't resolve simple container names within sidekick structure

I want to check I’m not doing something crazy here. We have a stack that consists of a load balancer (lb) pointing to an nginx container (web), which has DB (db) and PHP (api) containers as sidekicks.

When run locally using links (without the lb container, or with a haproxy replacement) the web container can resolve api fine as ‘api’ via its hosts file.

When run on Rancher with web as the primary and api & db as sidekicks (without links, as suggested), the web container gets into a fail/restart loop as it can’t resolve ‘api’. (The error is ‘host not found in upstream “api”’.)

However, if we use api.web to refer to the api sidekick inside the web container, the api container can be resolved. This messes with a lot of config files that depend on api being ‘api’ whether run locally, on Docker Cloud, or on Rancher.

Is this the intended behaviour (I swear a few months ago I was running setups like this without issues)? If so, how can it be circumvented? I considered reading out the api.web IP address and adding it to a line in /etc/hosts on startup, but that feels very messy.
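Roughly, the messy version would look something like this (untested, just a sketch, assuming getent is available in the nginx image and that $$ escapes compose’s variable interpolation):

web:
  image: nginx public
  labels:
    io.rancher.sidekicks: api,db
  # Hypothetical wrapper: look up the sidekick via its api.web name,
  # alias it to plain "api" in /etc/hosts, then start nginx as usual.
  command: >
    sh -c 'echo "$$(getent hosts api.web | cut -d" " -f1) api" >> /etc/hosts
    && exec nginx -g "daemon off;"'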

I’ve posted a simplified compose file below:

web:
  image: nginx public
  expose:
    - 80
  volumes_from:
    - api
  labels:
    io.rancher.sidekicks: api,db
    io.rancher.scheduler.affinity:host_label: client=apiv1
api:
  image: api image (based on php public)
  expose:
    - 9000
  labels:
    io.rancher.scheduler.affinity:host_label: client=apiv1
db:
  image: 'mysql:latest'
  expose:
    - 3306
  labels:
    io.rancher.scheduler.affinity:host_label: client=apiv1

What version of Rancher?

1.1.3 currently, though we also saw the same behaviour with previous versions (definitely as early as 1.1.1).

Here are the results from the hosts & resolv.conf files and some test pings

I think this issue is related - I’m using Alpine Linux as well but haven’t tested the workarounds yet:

https://github.com/rancher/rancher/issues/5041#issuecomment-245380438

@bferns I just came across this same problem. I ended up sticking a few containers together as sidekicks (because I had to use volumes_from), and all of a sudden everything broke. It took me a bit to figure out the problem was DNS resolution, and a quick search on Google linked this thread.

The good news is that I tried the workaround you linked and it worked: I basically added the name of the main container as a dns_search parameter to all of the sidekicks, and they can now resolve each other by container name.
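For anyone else hitting this, it ends up looking roughly like the snippet below. I’m reusing the service names from the compose file above as an illustration; the dns_search entries are the actual change.

web:
  image: nginx public
  labels:
    io.rancher.sidekicks: api,db
api:
  image: api image (based on php public)
  # Search the primary service's name so a plain "api" or "db" lookup
  # falls through to api.web / db.web in Rancher's internal DNS.
  dns_search:
    - web
db:
  image: 'mysql:latest'
  dns_search:
    - web

If the primary container also needs to resolve its sidekicks by short name, the same dns_search entry on it may be needed as well.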

Thanks!

It helped me to use the naming convention described in https://docs.rancher.com/rancher/v1.2/zh/cattle/internal-dns-service/

Example:
I have a service called projects running an (Alpine) nginx and a sidekick called redmine, so I set the directive
proxy_pass http://redmine.projects:3000;
in my nginx container.
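For context, the compose side of that setup looks roughly like this (the projects/redmine names are from my stack; the images are just the obvious choices and may differ from what you run):

projects:
  image: nginx:alpine
  labels:
    io.rancher.sidekicks: redmine
redmine:
  image: redmine
  expose:
    - 3000

With that in place, Rancher’s internal DNS resolves redmine.projects from inside the projects container, which is what the proxy_pass directive above points at.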