Debugging rancher-compose

I’m trying to automate the process of updating a deployed image, so started experimenting with rancher-compose.

I have two containers that I want to deploy: nginx-demo which depends on flask-demo. Each project/container has its own repo and directory, which contains the associated Dockerfile, docker-compose.yml and rancher-compose.yml (see below)

(The first hurdle was rancher-compose’s dependency on S3; it’s not clear to me why I need an AWS account in the first place to use Rancher (on Digital Ocean), and I was expecting any required container image state to be stored in the named Docker registry. In any case, I created the required AWS credentials and set the environment variables.)
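For reference, these are the credential variables I set before running rancher-compose, assuming it reads the standard AWS SDK variable names (the values here are obviously placeholders):

```shell
# Standard AWS credential environment variables (placeholder values --
# substitute your own IAM access key pair before running rancher-compose).
export AWS_ACCESS_KEY_ID="EXAMPLE_KEY_ID"
export AWS_SECRET_ACCESS_KEY="EXAMPLE_SECRET_KEY"
```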

At first, “rancher-compose up” seems to succeed, but the UI shows the nginx container constantly restarting. Checking the logs I see that there’s a configuration error.

host not found in upstream "FLASK:5000"

The config was tested locally and works; i.e., “docker-compose up” runs without errors on my machine.

ISSUE 1: The “links” directive in the docker-compose file seems to have worked, since in the UI I can jump from the nginx service to the linked flask service, and it has the name “FLASK”.

However, while trying to debug, I made a change to the config (renaming “FLASK”), then reran “docker-compose build” and “rancher-compose up”.

ISSUE 2: There is no sign of the config changes (i.e., I get the same error in the log). The changes only show up if I DELETE the project from the UI and then rerun “rancher-compose up”.

ISSUE 3: “rancher-compose rm -f” reports “Deleted”, but both services still show up in the UI (both service definitions and instances).

At the very least, Rancher should show the underlying docker commands that are being issued to the server.

flask-demo/docker-compose.yml

flask:
  build: .
  ports:
    - "5000:5000"

nginx-demo/docker-compose.yml

flask:
  extends:
    file: ../flask-demo/docker-compose.yml
    service: flask

nginx:
  build: .
  ports:
    - "80:80"
  links:
    - flask:FLASK
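For what it’s worth, once the “extends” is resolved, the flask service that nginx-demo deploys should be equivalent to the following (my expansion, assuming compose resolves the build path relative to the extended file’s directory):

```yaml
flask:
  # the build path from flask-demo/docker-compose.yml, resolved
  # relative to that file's location
  build: ../flask-demo
  ports:
    - "5000:5000"
```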

nginx.conf

  upstream flask {
    server FLASK:5000;
  }
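For completeness, that upstream is referenced from a server block along these lines (the location/proxy_pass details here are my sketch of a standard proxy setup, not the exact file):

```nginx
upstream flask {
  server FLASK:5000;   # "FLASK" is the link alias from docker-compose.yml
}

server {
  listen 80;
  location / {
    proxy_pass http://flask;
  }
}
```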

What version of rancher-compose are you using? I am using v0.2.1, which was released last week; that version must be used with Rancher Server v0.28.0. You can run “rancher-compose -v” to find your version. If you are on rancher-compose v0.1.3, try 0.2.1 — there were significant improvements to rancher-compose.

I tried to reproduce your problem, but simplified a couple of steps because I didn’t have the Dockerfiles. Instead of using “build”, I just chose the images “richburdon/flask_demo” and “nginx”. I also placed all the files in one directory.

Here are my files:

flaskfile.yml

extendedflask:
  ports:
  - "5000:5000"
  image: richburdon/flask-demo

docker-compose.yml

flask:
  extends:
    file: flaskfile.yml
    service: extendedflask
nginx:
  ports:
    - "80:80"
  image: nginx
  links:
    - flask:FLASK

rancher-compose.yml

flask:
  scale: 1
nginx:
  scale: 1

Issue 1/2: These seem to be the same issue (correct me if I’m wrong). I had done a “rancher-compose up”, then changed the link to flask:FLASK1, and the UI and DNS were updated without any need to delete the project.

Issue 3: I had no issues with services being removed in the UI. Did you try a refresh?

Update: With v0.1.3 and Rancher v0.25.0

Issue 1/2: When I edit the name from FLASK to FLASK1, at first an additional DNS entry is created for the configuration change instead of the existing one being updated. After a little while (less than a minute), the DNS resolves and I only see the updated name.

Issue 3: My rm -f was successful in removing services from my Rancher instance.

The only difference between my setup and yours is building an image versus using an image from dockerhub. Would you be able to share your Dockerfiles so I could try building locally?

Issue 3: I figured out what the issue is. There is a bug where, if the services are stopped before being removed, they are not removed. I’ve logged a bug for it: https://github.com/rancher/rancher/issues/1598

The fix has been made for issue 3, and just needs to be released.