Rancher Release - v2.0.16 - Addresses Rancher CVE-2019-13209

Release v2.0.16


Security fixes

This release addresses a security vulnerability found in Rancher. The issue was found and reported by Matt Belisle and Alex Stevenson at Workiva, and applies to Rancher versions v2.0.0-v2.0.15, v2.1.0-v2.1.10, and v2.2.0-v2.2.4. The fix for this vulnerability is also available in Rancher v2.2.5 and v2.1.11. Rancher v1.6 is not affected.

The vulnerability is a Cross-Site WebSocket Hijacking attack, which allows an exploiter to gain access to clusters managed by Rancher with the roles and permissions of a victim. It requires the victim to be logged into a Rancher server and then to visit a third-party site hosted by the exploiter. Once that happens, the exploiter can execute commands against the Kubernetes API with the identity and permissions of the victim. You can view the official CVE here [CVE-2019-13209]

Rancher Provisioned Clusters Certificate Expiry

In Rancher v2.0 and v2.1, the auto-generated certificates for Rancher-provisioned clusters expire one year after they are issued. This means that if you created a Rancher-provisioned cluster about one year ago, you need to rotate its certificates; otherwise the cluster will go into a bad state when the certificates expire. Rancher v2.2.x provides UI support for certificate rotation. Starting with Rancher v2.0.14, the rotation can also be performed through the Rancher API; more details are here.
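As a rough sketch, the API-driven rotation is a single action request against the cluster resource. The server URL, API token, and cluster ID below are placeholders, and the exact action name may vary by version; consult the linked documentation for the request your release expects.

```shell
# Rotate the certificates of a Rancher-provisioned cluster via the v3 API.
# <RANCHER_URL>, the bearer token, and the cluster ID (c-xxxxx) are placeholders.
curl -s \
  -u "token-xxxxx:secret" \
  -X POST \
  "https://<RANCHER_URL>/v3/clusters/c-xxxxx?action=rotateCertificates"
```

After the action is accepted, the cluster will show an updating state while the certificates are regenerated and the node components restart.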

Network Policies

The default network plugin selected when creating a Kubernetes cluster has been updated to Canal with no network policies. With this change in default behavior, network policies are no longer enforced between projects, which means inter-project communication is allowed.

If you want to turn on network policies, which was the previous default behavior, you need to enable them in your cluster options when deploying a cluster.
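If you edit the cluster as YAML, the relevant flag looks roughly like the following. This is an illustrative fragment, not an authoritative schema; the exact key placement may differ between Rancher versions.

```yaml
# Cluster configuration ("Edit as YAML") - illustrative fragment.
# Setting enable_network_policy to true restores network policy
# enforcement between projects.
enable_network_policy: true
rancher_kubernetes_engine_config:
  network:
    plugin: canal
```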


As of v2.0.7, we introduced a “System Project”, which contains specific namespaces, e.g. cattle-system. If these namespaces have been assigned to a project, please move them out of that project before upgrading, so they can be appropriately moved into the System Project.


NOTE - Image Name Changes: Please note that as of v2.0.0, our images will be rancher/rancher and rancher/rancher-agent. If you are using v1.6, please continue to use rancher/server and rancher/agent.

  • rancher/rancher:v2.0.16
  • rancher/rancher-agent:v2.0.16

Rancher Server Tags

Rancher server has two different tags. For each major release tag, we will provide documentation for the specific version.

  • rancher/rancher:latest tag will be our latest development builds. These builds will have been validated through our CI automation framework. These releases are not meant for deployment in production.
  • rancher/rancher:stable tag will be our latest stable release builds. This tag is the version that we recommend for production.

Please do not use releases with a rc{n} suffix. These rc builds are meant for internal testing by the Rancher team.

Latest - v2.2.5 - rancher/rancher:latest

Stable - v2.2.5 - rancher/rancher:stable

The rancher/rancher:v2.0.16 image is available in the server-charts/latest and server-charts/stable Rancher Helm repos.
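Adding those repos looks like the following. The repo names (rancher-latest, rancher-stable) are a common convention, not a requirement; pick any local names you like.

```shell
# Add both Rancher server chart repos (repo names are illustrative).
helm repo add rancher-latest https://releases.rancher.com/server-charts/latest
helm repo add rancher-stable https://releases.rancher.com/server-charts/stable

# Refresh the local chart index so the new versions are visible.
helm repo update
```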

Upgrades and Rollbacks

Rancher supports both upgrades and rollbacks starting with v2.0.2. Please note the version you would like to upgrade or roll back to when changing the Rancher version.

In any upgrade to v2.0.3 or later, new pods will be created when scaling up workloads [#14136]. In order to support updating scheduling rules for workloads [#13527], a new field was added to all workloads on update, which causes any pods in workloads from previous versions to be re-created.

Note: When rolling back, we expect you to roll back to the state at the time of your upgrade. Any changes made after the upgrade will not be preserved. When rolling back a Rancher single-node install, you must specify the exact version you want to change the Rancher version to, rather than using the default :latest tag.
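For a single-node install, pinning the exact version looks roughly like this. The container and volume names below are illustrative; follow the official single-node upgrade/rollback documentation for the full procedure, including backing up your data first.

```shell
# Stop the currently running Rancher container (name is illustrative).
docker stop rancher-server

# Create a data container from the old Rancher container's volumes,
# so the data survives the version change.
docker create --volumes-from rancher-server --name rancher-data rancher/rancher:v2.0.16

# Start Rancher pinned to an explicit version tag, never :latest.
docker run -d --volumes-from rancher-data \
  -p 80:80 -p 443:443 \
  rancher/rancher:v2.0.16
```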


Known Major Issues

  • Sometimes the new Kubernetes version does not get updated right away on upgraded Kubernetes clusters; this corrects itself as soon as a user application is deployed on the node [15831]

Major Bug Fixes since v2.0.15

  • Fixed an issue where node driver machine provisioning could fail with a “something went wrong running an SSH command” error [21012]

Rancher CLI Downloads