Service upgrade (image pull) via the API

Hey guys,

I’ve been googling for a few days now and still can’t find a way to force a service upgrade (that triggers an image pull - the actual desired effect) through the API in Rancher 2.

In other words, I can click on a service, click Edit and then Save, and Rancher/Kubernetes will pull a fresh image from the Docker registry. But I want this to be automated, triggered from my CI/CD pipeline.


Anyone? I have the same issue. :slight_smile:

TL;DR: If you PUT the same spec and nothing changes, nothing changes. Add/update a label/annotation with the current time (or something similar); that will cause a change and a redeployment, which then follows the imagePullPolicy.
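For example, this is roughly what that trick looks like if you have kubectl access to the cluster (a minimal sketch; the namespace and deployment name are placeholders, and any annotation key works as long as the value changes):

# Hypothetical names; patching an annotation into the pod template changes the
# spec, so Kubernetes rolls the pods and re-pulls the image per imagePullPolicy.
kubectl -n my-namespace patch deployment my-app -p \
  "{\"spec\":{\"template\":{\"metadata\":{\"annotations\":{\"cattle.io/timestamp\":\"$(date -u +%Y-%m-%dT%H:%M:%SZ)\"}}}}}"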

We’re currently running Rancher 2.1.1 and still face the same issue. We’d like to upgrade a workload via the API. Currently, we upgrade the workload with a request identical to the one the UI sends (copied from the browser dev tools, with the Authorization token added). After that, we pause and resume the services in order to trigger the upgrade (via the action links).
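For reference, that pause/resume round trip looks roughly like this (a sketch with placeholder project/workload IDs, hostname, and token; the exact action URLs are listed under "actions" in a GET of the workload, so it is safer to read them from there than to hard-code them):

WORKLOAD='https://rancher.tld/v3/project/c-xxxxx:p-yyyyy/workloads/deployment:my-namespace:my-workload'
# Pause, then resume; as described above, this is what currently triggers the upgrade for us.
curl -s -X POST -H "Authorization: Bearer token-xxx" "$WORKLOAD?action=pause"
curl -s -X POST -H "Authorization: Bearer token-xxx" "$WORKLOAD?action=resume"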

I think this is far from best-practice (but it works ;-)).

Most convenient would be some kind of action link to redeploy the app: “?action=redeploy” for instance.

My question is: What is the best-practice way to redeploy an app with exactly the same configuration (including the image name) via an API request? Can you provide an example?

Best, qdrop17

The UI in 2.1+ sets an annotation with the time, as I described. I don’t think pause+resume should be redeploying either, so that’s probably a bug in Rancher or k8s, since that changes a field outside of the spec.
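You can check where that timestamp lands after an upgrade with something like this (assuming kubectl access; the deployment name and namespace are placeholders, and the annotation key is the one used elsewhere in this thread):

kubectl -n my-namespace get deployment my-app \
  -o jsonpath='{.spec.template.metadata.annotations.cattle\.io/timestamp}'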

We tried to do so:

now=$(date -u +"%Y-%m-%dT%H:%M:%SZ")

curl -H "Authorization: Bearer token-xxx" 'https://kubernetes.tld/v3/project/c-ckzdb:p-ttrrr/workloads/deployment:projects:lolo-lpn' -X PUT --data-binary '{"annotations":{"cattle.io/timestamp":"'"$now"'","workload.cattle.io/state":"{\"cmFuY2hlcjItbm9kZTAx\":\"c-ckzdb:m-b915e3ee4064\"}"},"baseType":"workload","containers":[{"allowPrivilegeEscalation":false,"image":"mongo:latest","imagePullPolicy":"Always","initContainer":false,"name":"lolo-lpn","privileged":false,"readOnly":false,"resources":{"type":"/v3/project/schemas/resourceRequirements","requests":{},"limits":{}},"restartCount":0,"runAsNonRoot":false,"stdin":true,"stdinOnce":false,"terminationMessagePath":"/dev/termination-log","terminationMessagePolicy":"File","tty":true,"type":"container","volumeMounts":[{"mountPath":"/data/db","name":"lolo-mongo","readOnly":false,"type":"/v3/project/schemas/volumeMount"}],"capAdd":[],"capDrop":[]},{"allowPrivilegeEscalation":false,"environment":{"MONGODB_URI":"mongodb://lolo:lolo@lolo-lpn/lolobuttonadmin","NODE_ENV":"production"},"image":"registry.jls.digital/various/lolo-button-admin:dev","imagePullPolicy":"Always","initContainer":false,"name":"lolo-lpn-admin","privileged":false,"readOnly":false,"resources":{"type":"/v3/project/schemas/resourceRequirements"},"restartCount":0,"runAsNonRoot":false,"stdin":true,"stdinOnce":false,"terminationMessagePath":"/dev/termination-log","terminationMessagePolicy":"File","tty":true,"type":"container"}],"created":"2018-09-05T08:15:12Z","createdTS":1536135312000,"creatorId":null,"deploymentConfig":{"maxSurge":0,"maxUnavailable":1,"minReadySeconds":0,"progressDeadlineSeconds":600,"revisionHistoryLimit":10,"strategy":"RollingUpdate"},"deploymentStatus":{"availableReplicas":1,"conditions":[{"lastTransitionTime":"2018-09-05T08:15:12Z","lastTransitionTimeTS":1536135312000,"lastUpdateTime":"2018-09-05T08:15:12Z","lastUpdateTimeTS":1536135312000,"message":"Deployment has minimum availability.","reason":"MinimumReplicasAvailable","status":"True","type":"Available"},{"lastTransitionTime":"2018-10-24T07:47:55Z","lastTransitionTimeTS":1540367275000,"lastUpdateTime":"2018-10-24T11:50:22Z","lastUpdateTimeTS":1540381822000,"message":"ReplicaSet \"lolo-lpn-86455cf677\" has successfully 
progressed.","reason":"NewReplicaSetAvailable","status":"True","type":"Progressing"}],"observedGeneration":39,"readyReplicas":1,"replicas":1,"type":"/v3/project/schemas/deploymentStatus","unavailableReplicas":0,"updatedReplicas":1},"dnsPolicy":"ClusterFirst","hostIPC":false,"hostNetwork":false,"hostPID":false,"id":"deployment:projects:lolo-lpn","imagePullSecrets":[{"name":"jls-registry","type":"/v3/project/schemas/localObjectReference"}],"labels":{"workload.user.cattle.io/workloadselector":"deployment-projects-lolo-lpn"},"name":"lolo-lpn","namespaceId":"projects","paused":false,"projectId":"c-ckzdb:p-mksps","publicEndpoints":[{"addresses":["104.248.35.182"],"allNodes":true,"hostname":"lolo-lpn.jls.digital","ingressId":"projects:projects-lb","nodeId":null,"podId":null,"port":443,"protocol":"HTTPS","serviceId":"projects:ingress-3f26d46d22347f72677bcd49089c25b9","type":"publicEndpoint"}],"restartPolicy":"Always","scale":1,"schedulerName":"default-scheduler","scheduling":{"node":{"nodeId":"c-ckzdb:m-b915e3ee4064"}},"selector":{"matchLabels":{"workload.user.cattle.io/workloadselector":"deployment-projects-lolo-lpn"},"type":"/v3/project/schemas/labelSelector"},"state":"active","terminationGracePeriodSeconds":30,"transitioning":"no","transitioningMessage":"","type":"deployment","uuid":"cfe0448a-b0e3-11e8-ba7e-3a420f52ae39","volumes":[{"hostPath":{"kind":"","path":"lolo-mongo"},"name":"lolo-mongo","type":"/v3/project/schemas/volume"}],"workloadAnnotations":{"deployment.kubernetes.io/revision":"30","deployment.kubernetes.io/timestamp":"'"$now"'","field.cattle.io/creatorId":"user-d7jrs"},"workloadLabels":{"workload.user.cattle.io/workloadselector":"deployment-projects-lolo-lpn"}}' --compressed

Unfortunately, "annotations":{"cattle.io/timestamp":"'"$now"'" did not resolve the issue. Maybe we misunderstood something here?

Interesting - how come it’s working for us then? :slight_smile:

I wrote a small Lambda function that is triggered by a Docker Hub webhook. It does a GET request to the Rancher API and then just submits the very same data back, and voilà, Rancher redeploys the services.

It’s been working for a few weeks now; it’s definitely an option.
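In shell terms, the core of what that function does is just the following (a rough sketch with placeholder URL and token, not the actual Lambda code):

WORKLOAD='https://rancher.tld/v3/project/c-xxxxx:p-yyyyy/workloads/deployment:my-namespace:my-workload'
# Fetch the current workload JSON, then PUT the identical body straight back.
body=$(curl -s -H "Authorization: Bearer token-xxx" -H 'Accept: application/json' "$WORKLOAD")
curl -s -X PUT -H "Authorization: Bearer token-xxx" -H 'Content-Type: application/json' \
  --data-binary "$body" "$WORKLOAD"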

Alright, we nailed it:

First, we get the current pod configuration and adjust the timestamp fields with sed:

pod_upgrade_body=$(curl -s 'https://kubernetes.tld/v3/project/c-ckzdb:p-mksps/workloads/deployment:projects:pod' -X GET -H "Authorization: Bearer token-xxx" -H 'Accept-Encoding: gzip, deflate, br' -H 'Connection: keep-alive' -H 'Pragma: no-cache' -H 'Cache-Control: no-cache' -H 'content-type: application/json' -H 'accept: application/json' 2>&1 | sed  "s/\"cattle\.io\/timestamp\"\:\"[0-9T:Z-]*\"/\"cattle\.io\/timestamp\":\"$(date -u +"%Y-%m-%dT%H:%M:%SZ")\"/g")

After that, we PUT the modified body:

curl 'https://kubernetes.tld/v3/project/c-ckzdb:p-mksps/workloads/deployment:projects:pod' -X PUT -H "Authorization: Bearer token-xxx" -H 'Accept-Encoding: gzip, deflate, br' -H 'Connection: keep-alive' -H 'Pragma: no-cache' -H 'Cache-Control: no-cache' -H 'content-type: application/json' -H 'accept: application/json' --data-binary "$pod_upgrade_body" --compressed

Make sure to use the correct headers; otherwise it won’t work.

This redeploys the pod properly. It might be handy to include this in the documentation, since redeploying workloads via the API is a basic task that’s needed in pretty much every CI/CD pipeline.

I tried to use your method, but this is what I got every single time after executing the curl command:

{"baseType":"error","code":"InvalidBodyContent","message":"Failed to parse body: invalid character '\x1f' looking for beginning of value","status":422,"type":"error"}

Have you ever encountered this error?

I tried using

{\"id\":\"deployment:$namespace:$workload\",\"annotations\":{\"cattle.io/timestamp\":\"$date\"}}

as the request body in the curl call, and it works.
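Put together, that ends up as something like the following (a sketch following the approach just described, with placeholder cluster/project IDs, hostname, and token):

namespace='my-namespace'
workload='my-workload'
date=$(date -u +%Y-%m-%dT%H:%M:%SZ)
# PUT only the workload id plus a fresh timestamp annotation; the changed
# annotation is what triggers the redeploy, which then follows imagePullPolicy.
curl -s -X PUT \
  -H "Authorization: Bearer token-xxx" \
  -H 'Content-Type: application/json' \
  -H 'Accept: application/json' \
  --data-binary "{\"id\":\"deployment:$namespace:$workload\",\"annotations\":{\"cattle.io/timestamp\":\"$date\"}}" \
  "https://rancher.tld/v3/project/c-xxxxx:p-yyyyy/workloads/deployment:$namespace:$workload"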