I want to check I’m not doing something crazy here. We have a stack consisting of a load balancer (lb) pointing to an nginx container (web), which has DB (db) and PHP (api) containers as sidekicks.
When run locally using links (without the lb container, or with an haproxy replacement), the web container resolves the api container fine as ‘api’ via its hosts file.
When run on Rancher with web as the primary and api & db as sidekicks (without links, as suggested), the web container gets into a fail/restart loop because it can’t resolve ‘api’ (nginx errors with ‘host not found in upstream “api”’).
However, if we use api.web to refer to the api sidekick from inside the web container, it resolves fine. This breaks a lot of config files that depend on the api container being reachable as ‘api’ whether we run locally, on Docker Cloud, or on Rancher.
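For context, the relevant part of the nginx config is roughly the following (simplified; the fastcgi_pass line is the one that fails to resolve on Rancher, and the port matches the 9000 the api container exposes):

location ~ \.php$ {
    # 'api' resolves via links locally, but not as a Rancher sidekick
    fastcgi_pass api:9000;
    include fastcgi_params;
}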
Is this the intended behaviour? (I swear a few months ago I was running setups like this without issues.) If so, how can it be circumvented? I considered reading out the api.web IP address and adding a line to /etc/hosts on startup, but that feels very messy.
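For completeness, that messy workaround would be an entrypoint wrapper along these lines (a sketch; getent assumes a glibc-based image like the official nginx one):

#!/bin/sh
# Sketch of the /etc/hosts workaround: look up the sidekick by its
# Rancher name (api.web) and alias it to plain 'api' before starting nginx.
API_IP="$(getent hosts api.web | awk '{ print $1 }')"
[ -n "$API_IP" ] && echo "$API_IP api" >> /etc/hosts
exec nginx -g 'daemon off;'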
I’ve posted a simplified compose file below:
web:
  image: nginx # public nginx image
  expose:
    - 80
  volumes_from:
    - api
  labels:
    io.rancher.sidekicks: api,db
    io.rancher.scheduler.affinity:host_label: client=apiv1
api:
  image: api-image # placeholder: our API image, based on the public php image
  expose:
    - 9000
  labels:
    io.rancher.scheduler.affinity:host_label: client=apiv1
db:
  image: mysql:latest
  expose:
    - 3306
  labels:
    io.rancher.scheduler.affinity:host_label: client=apiv1
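And, for comparison, the local setup where ‘api’ resolves fine is roughly this (the haproxy stand-in for lb omitted); the links entries are what put ‘api’ into web’s hosts file:

web:
  image: nginx # public nginx image
  expose:
    - 80
  volumes_from:
    - api
  links:
    - api
    - db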