Container continuously restarting

Hi everyone.
I have just installed Rancher (v 1.5.9) and added a host (rancher/agent:v1.2.2).
As a first test, I’m trying to deploy a 3-container stack (db, app, and proxy) using a docker-compose file.

The db and app containers start fine but the proxy one continues to restart.

Here is the service declaration in the compose file:

proxy:
    image: inforlife/nginx
    command: nginx -g 'daemon off;'
    volumes:
      - /usr/docker-share/proxy/ssl:/ssl
    ports:
      - 80:80
    links:
      - app
    depends_on:
      - app
    network_mode: "bridge"

On the host I have the /usr/docker-share/proxy/ssl directory containing the key and the PEM certificate.

Here is the log:

12:43:26 PM	INFO	service.update.info	Service reconciled: Requested: 1, Created: 1, Unhealthy: 0, Bad: 0, Incomplete: 0	
12:43:24 PM	INFO	service.instance.start	Starting stopped instance	
12:43:24 PM	INFO	service.update.wait (2 sec)	Waiting for instances to start	
12:43:24 PM	INFO	service.trigger	Re-evaluating state	
12:43:24 PM	INFO	service.trigger.info	Requested: 1, Created: 1, Unhealthy: 0, Bad: 0, Incomplete: 0	
12:43:24 PM	INFO	service.update (2 sec)	Updating service

It restarts every 3 to 4 seconds.

Any ideas what is going on?

Thanks and have a nice day.

The container is (itself) exiting, and then getting restarted because the policy says it’s supposed to be running. The logs (of the container(s), not the service) may have more info.

Hi Vincent,
Thanks for your reply.

The container is running, and the container logs show: host not found in upstream "app" in /etc/nginx/conf.d/default.conf:33.

From my understanding, the links entry for app in the compose file should make this work. Am I missing anything here?

Thanks.

That looks like an nginx config problem. Does your nginx config point to the name of the app container? Can you post the entire compose and the nginx default.conf?

For non-managed networking you need to opt-in to get Rancher DNS with the io.rancher.container.dns label set to true.
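As a sketch of where that label goes (based on the proxy service from the original compose file; the label name is the one mentioned above, and this is an assumption about the exact placement):

```yaml
proxy:
    image: inforlife/nginx
    command: nginx -g 'daemon off;'
    network_mode: "bridge"
    labels:
      # Opt in to Rancher DNS for a container using non-managed networking
      io.rancher.container.dns: "true"
```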

We don’t support depends_on (and neither does Docker in Swarm mode) because it doesn’t really solve the problem. Components in a distributed system that have dependencies need to either wait for them or exit and be re-launched later (which is what’s happening here).

You also need to set a resolver for nginx to work properly, or else the name will be resolved exactly once at startup and never reflect the changing list of container IPs. Rancher’s is always 169.254.169.250. https://www.nginx.com/blog/dns-service-discovery-nginx-plus/
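With open-source nginx, the usual trick (covered in that blog post) is to combine the resolver directive with a variable in proxy_pass, which forces nginx to re-resolve the name at request time instead of caching it at startup. A minimal sketch, assuming the app service listens on port 3000 (the port and location are placeholders, not from the original config):

```nginx
server {
    listen 80;

    # Rancher's internal DNS server; re-resolve every 10 seconds
    resolver 169.254.169.250 valid=10s;

    location / {
        # Using a variable makes nginx resolve the hostname at runtime,
        # so it tracks the changing list of container IPs
        set $upstream_app app;
        proxy_pass http://$upstream_app:3000;
    }
}
```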

Thanks for your replies.
The app has already been running in production directly with Docker Compose for several weeks without issues, so I don’t think it is an nginx config issue.

Where should I set io.rancher.container.dns to true? In the rancher-compose file?

Regarding the resolver, do you mean I have to add resolver 169.254.169.250 valid=10s; at the top of my nginx config file? If I understand correctly, that will make the container work only with Rancher. Correct?

Thanks again for the support and have a nice day.