Rancher auto deploy? (CI/CD workflow)

Basically I want to do CI/CD with the inclusion of Rancher.

Our usual CI/CD workflow looks something like this:

  1. make code changes
  2. push to git (GitLab)
  3. GitLab publishes a webhook
  4. Jenkins builds the project
  5. we scp the project files to the server

I want to replace step #5 with Docker + Rancher, so that I can automate the full process:

  1. make code changes
  2. push to git (gitlab)
  3. gitlab publishes webhook
  4. Jenkins builds the project + creates a Docker image
  5. Jenkins pushes the Docker image to a registry
  6. SOME MAGIC HAPPENS and my image is deployed to a server.

I am at my wits’ end with step #6… I essentially want to automate the Rancher deployment (i.e. I click a button for the stack, and Rancher handles the scheduling for me).

What I don’t want to do is SSH into a server and run docker pull.

Any way to get the above things done?


You can use the Rancher REST API to deploy from Jenkins via REST calls. Take a look at Rancher’s API documentation.

Essentially, step #6 would be a Jenkins job that makes a REST call to Rancher specifying some information about the image to use, ports to expose, etc., and Rancher would handle pulling down the image and starting it up as a container.
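As a rough sketch, the call looks something like the below — the project ID, stack/environment ID, and field names here are illustrative, so browse your own server’s API at http://<rancher-host>:8080/v1 to confirm the exact schema:

# Rough sketch: creating a service through the Rancher v1 API from a
# Jenkins job. Auth is HTTP basic with an API key pair; the IDs and
# field names are placeholders -- confirm them in your /v1 API browser.
curl -s -u "${RANCHER_ACCESS_KEY}:${RANCHER_SECRET_KEY}" \
  -X POST -H 'Content-Type: application/json' \
  "http://rancher.example.com:8080/v1/projects/1a5/services" \
  -d '{
        "name": "my-service",
        "environmentId": "1e1",
        "launchConfig": {
          "imageUuid": "docker:myreg/myservice:build-123",
          "ports": ["8080:80/tcp"]
        }
      }'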

@warryo do you know where this documentation is?

The one I saw doesn’t seem to give much guidance on which endpoint to use.

Are you locked into using Jenkins for this?
I ask because the open source version of Drone (https://github.com/drone/drone) is all Docker based and, in my experience, is very good at performing the type of pipeline you are talking about (specifically, a repo-centric process). It has plugins for building Docker images, publishing them, and for talking to Rancher (http://readme.drone.io/plugins/rancher/).

You should take a look at that.

We’ve successfully used rancher-compose in the build script/Makefile inside our GitLab CI server to do just that. Also, the new Rancher CLI might be of use.
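Roughly, the deploy step of the CI job boils down to something like this (a sketch — the stack name and compose file paths are placeholders, and the three RANCHER_* variables come from an API key you create in the Rancher UI):

# Sketch of a CI deploy step: point rancher-compose at the right
# environment via env vars, then create/upgrade the stack from the
# compose files checked into the repo. All values are placeholders.
export RANCHER_URL=http://rancher.example.com:8080
export RANCHER_ACCESS_KEY=xxx
export RANCHER_SECRET_KEY=xxx

rancher-compose -p my-stack \
  -f docker-compose.yml \
  -r rancher-compose.yml \
  up -d --force-upgrade --pull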

Could you share your workflow? Or at least how you went about it?

I have this workflow set up for Atlassian Bitbucket and Bamboo, but the concepts will apply to whatever tools you are using.

I create a custom docker-compose and rancher-compose for my “stack” and store those in a Bitbucket project for my stack. They contain the info about my service, the link to my container registry, and the LB I want created.

Create an API key for each Rancher env you want to deploy to.

I deploy the stack from a job in Bamboo that is configured to hit the Rancher server endpoint for the env I want the stack in (I only do this to get the initial stack created).

Whenever the service needs to be updated (regardless of the env - dev/qa/staging/prod), we simply hit the “upgrade” button for the service in the stack in Rancher (we have “always pull” on, and update the image tag if needed to install a specific image - myreg/myservice:custombuild-0.1.1-test).

As for scheduling, I use host labels that are mapped in the stack config (env=dev, app1=dev-api, app2=dev-api2, etc.)

Examples -

docker-compose

dev-api:
  environment:
    DATABASE_URL: postgis://youwish.rds.amazonaws.com/?
    CENSUS_URL: postgis://goodluck.rds.amazonaws.com/?
    SECRET_KEY: it's a secret!
    CONTAINER: lighting_dev
    jwt_verify_expiration: false
    DJANGO_SETTINGS_MODULE: lightning.settings
    CACHE_URL: nicetry.cache.amazonaws.com:11211
    ENV: dev
    ALLOWED_HOSTS: '*'
    PORT: '6969'
  log_driver: ''
  labels:
    io.rancher.container.pull_image: always
    io.rancher.scheduler.global: 'true'
    io.rancher.scheduler.affinity:host_label: env=dev
  tty: true
  log_opt: {}
  image: myreg.dkr.ecr.us-east-1.amazonaws.com/myimg:trythis-0.0.1
  stdin_open: true
dev-lb-lightning:
  ports:
    - 6969:80
  tty: true
  image: rancher/load-balancer-service
  links:
    - dev-api:dev-api
  stdin_open: true

rancher-compose

dev-api:
  scale: 1
dev-lb-lightning:
  scale: 1
  load_balancer_config:
    haproxy_config: {}
  health_check:
    port: 42
    interval: 2000
    unhealthy_threshold: 3
    healthy_threshold: 2
    response_timeout: 2000

In my Bamboo deploy project -

  1. grab the compose files from the Bitbucket project
  2. run this script on the deployment server (or somewhere you can reference the rancher-compose binary) -

/usr/local/bin/rancher-compose \
  -r rancher-compose.yml \
  -f docker-compose.yml \
  -p lightning-api --debug \
  up -d --force-upgrade --pull --batch-size "1"

Env variables to map to the correct Rancher env -

RANCHER_URL=http://nothappening:8080/v1/projects/1a9
RANCHER_ACCESS_KEY=xxx
RANCHER_SECRET_KEY=xxx

You will then have a full stack created with your service and an LB if needed. Again, after that, simply hit the upgrade button on the service to pull and deploy the updated image. Let me know if you need any more info.

Edit: I forgot to mention that for completely “automated” deployment, we have enabled a trigger on the Bamboo deployment that can detect that a new image has been built and runs the deployment automatically. I’m not a huge fan of this simply because I don’t like blowing away and re-building the stack (in dev, this would happen around 30+ times/day); I prefer to just update the service within the running stack.

Phillip


I stuffed more detail over here: Deploying cointainers from Gitlab-ci

Wonderful, Phil, I think this is what I was looking for. Just for clarity:

/usr/local/bin/rancher-compose \
-r rancher-compose.yml \
-f docker-compose.yml \
-p lightning-api --debug \
up -d --force-upgrade --pull --batch-size "1"

This code is not being run on your deployment server, right? But rather on some random server w/ access to rancher-compose?

In our case, we are running rancher-compose on the deploy server; it was just easier to keep everything in the same place, since the “deploy” project is linked to the Bitbucket repo so we can just reference the “compose” files as if they were local.

Also, remember I only use this once for the initial setup of the stack; every upgrade after that is done from within Rancher itself.

Hm, so aren’t you bypassing Rancher’s scheduling logic by having it on a single server?

I.e. when I launch a stack through the Rancher UI, Rancher auto-picks the best server for me. Are you bypassing that logic?

The rancher-compose program talks to the server’s API just like the GUI.

I’m not bypassing the scheduling; I’m just using the scheduling in this instance to put that dev service on a specific dev host.

I do the same thing in production, with a slight modification -

labels:
  io.rancher.container.pull_image: always
  io.rancher.scheduler.global: 'true'
  io.rancher.scheduler.affinity:host_label: app=lightning

This way, my prod service gets “scheduled” on all hosts in the prod env that have the “app=lightning” host label.
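For reference, the host side of that mapping is just a label applied when the host is registered (labels can also be added through the UI). A sketch — the agent tag, server URL, and registration token below are placeholders you’d copy from your own “Add Host” screen:

# Sketch: attaching host labels when registering a Rancher agent.
# Multiple labels are joined with '&'; the URL, token, and agent tag
# are placeholders -- copy the real command from "Add Host".
sudo docker run -d --privileged \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -e CATTLE_HOST_LABELS='env=prod&app=lightning' \
  rancher/agent:v1.0.2 \
  http://rancher.example.com:8080/v1/scripts/<registration-token>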