In-Service Upgrade from CI

I’d like to perform an in-service upgrade (with no launchConfig changes) from a CI pipeline. We are not using docker-compose files to manage our deployments, so the service ID will be configured in the CI pipeline. What’s the best way to accomplish this?

The rancher up command seems to require {docker,rancher}-compose.yml files to work.

Although there might be a better option, you can always export the current configuration before doing an up command.


rancher export stack_name | tar xvf -
rancher up -u -d -s stack_name service_name

--pull or --force-upgrade might be good flags for you if you’re not changing the docker-compose.yml but want the services to upgrade anyway.
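Putting the export and upgrade steps together, a CI job might look something like the sketch below. The stack and service names are placeholders, and the assumption is that the rancher CLI picks up its endpoint and keys from environment variables set as CI secrets:

```shell
#!/bin/sh
set -eu

# Placeholders -- in a real pipeline these would come from CI variables.
STACK="my-stack"
SERVICE="my-service"

# Assumed: RANCHER_URL, RANCHER_ACCESS_KEY and RANCHER_SECRET_KEY are
# already exported in the CI environment for the rancher CLI to read.

# Work in a scratch directory so the exported compose files
# don't pollute the checkout.
WORKDIR="$(mktemp -d)"
cd "$WORKDIR"

# Export the current stack definition (docker-compose.yml and
# rancher-compose.yml) so `up` has something to work from.
rancher export "$STACK" | tar xvf -
cd "$STACK"

# Upgrade in place: --pull re-pulls the image, --force-upgrade
# upgrades even when the compose files themselves are unchanged.
rancher up -u -d -s "$STACK" --pull --force-upgrade "$SERVICE"
```

Since the export round-trips the existing definition, nothing in the launchConfig changes; the --force-upgrade flag is what makes the otherwise-unchanged service cycle.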

There might be a better way than this (I’m kind of hoping there is), but it’s how I’ve done it in the past.

Would this respect any pre-defined Scheduling rules? Currently, rancher-compose.yml does not contain any scheduling information. Many of my services have a specific host on which they must run (HIPAA and whatnot…)

Does your service definition have scheduling rules? If so, they’d be in the labels section of the docker-compose.yml that comes with the rancher export.

Also, what specifically would be changing? Do you have the “always pull the latest image” option set? If there are no specific changes, then it might not upgrade. We create hashes of our services so that when you attempt an upgrade, it only upgrades if there are actual changes.

Yes, it does.

However, according to the UI, there is no scheduling information in the rancher-compose.yml file.

How did you schedule them in the first place?

How are you ensuring they are on your specific host?

Does this answer your question?

Also, I’m on IRC as @artisangoose, if you’d like to continue the conversation there.

Okay, that makes more sense. Your issue is this:

The reason we removed it is that scheduling by host ID is not ideal. We’d have to implement something closer to host names, which isn’t ideal either.

Could you add a host label and schedule using host labels as a workaround? Basically, on the host you want to schedule on, add a label, and then require “must have a host label” with that label on the service.

This way, when you upgrade using the CLI, it would keep the same scheduling rules for that host. This would be the preferred approach, as you would then be able to move those containers onto another host easily by host label.
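Once the host carries a label, the scheduling rule lives in the labels section of the exported docker-compose.yml, so it survives the export/up round trip. A sketch of what that might look like (the service name, image, and the env=hipaa key/value are placeholders; the io.rancher.scheduler.affinity:host_label key is, as far as I recall, the Rancher scheduling label):

```yaml
# docker-compose.yml (excerpt) -- illustrative, not an exported file.
# Assumes the target host was given the label env=hipaa via the UI
# or at host registration time.
my-service:
  image: example/my-service:latest
  labels:
    # Require a host carrying the label env=hipaa.
    io.rancher.scheduler.affinity:host_label: env=hipaa
```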


Yeah, we can give this a whirl. Thanks!