OK, I'm having major weirdness with rancher-compose up --upgrade.
I have a simple service running on Rancher v1.0.0, and I'm trying to upgrade it.
Here is my docker-compose.yml:
testservice:
  ports:
    - 8080:8080/tcp
  environment:
    APP_CONFIG_URL: http://consul:8500/v1/kv/test-service-config?raw
    JVM_CONFIG_URL: http://consul:8500/v1/kv/test-service-container-config?raw
  external_links:
    - consul/consul:consul
    - TestSingleHostNameStack/redis:redis
  labels:
    io.rancher.scheduler.affinity: "container_label_soft_ne: io.rancher.stack_service.name=$${stack_name}/$${service_name}"
    io.rancher.container.pull_image: always
  tty: true
  hostname: testservice
  image: registry.colinxdev.com/test-service:DEV
  stdin_open: true
And rancher-compose.yml:
testservice:
  scale: 2
  health_check:
    port: 8080
    interval: 2000
    unhealthy_threshold: 3
    strategy: recreate
    response_timeout: 2000
    request_line: GET /test-service/api/heartbeat HTTP/1.0
    healthy_threshold: 2
  upgrade_strategy:
    start_first: true
I start with a working set of two containers, one on each of my nodes. Then I upgrade using this command:
rancher-compose up --force-upgrade --pull
A very odd sequence happens. First, new copies of the containers start, but then they stop, then start again, then stop again, forever, until I go into the GUI and choose 'Cancel Upgrade'.
Then an even more bizarre thing happens. When I choose 'Finish Upgrade' after cancelling, THEN the new containers start successfully, and the old ones are shut down.
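In case it matters, the GUI buttons I'm clicking have rancher-compose equivalents, at least in the builds from this era; a hedged sketch (flag names taken from rancher-compose up --help on my version, so double-check yours):

```shell
# What I ran to kick off the upgrade:
rancher-compose up --force-upgrade --pull -d

# Roughly equivalent to the GUI 'Finish Upgrade' button
# (finalizes the upgrade and removes the old containers):
rancher-compose up --upgrade --confirm-upgrade -d

# Roughly equivalent to the GUI 'Rollback' button:
rancher-compose up --upgrade --rollback -d
```

I see the same looping behavior whether I confirm from the GUI or the CLI, so it doesn't seem to be a GUI-only quirk.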
This doesn't make any sense to me at all, but I've been able to duplicate the sequence of events several times.