Rancher Release - v2.1.12 - Addresses Kubernetes CVE-2019-11247 and CVE-2019-11249 and Rancher CVE-2019-14435 and CVE-2019-14436

Release v2.1.12


  • This release ships the latest Kubernetes version, v1.13.9, for Rancher-launched Kubernetes clusters to address Kubernetes CVE-2019-11247 and CVE-2019-11249. Rancher recommends upgrading all Kubernetes clusters to this Kubernetes version.

  • This release addresses some security vulnerabilities found in Rancher:

    • CVE-2019-14436 - Project owner privilege escalation - This vulnerability allows a project member with access to edit role bindings to assign themselves or others a cluster-level role, granting them admin access to that cluster. The issue was found and reported by Michal Lipinski at Nokia. [#22026]
    • CVE-2019-14435 - This vulnerability allows authenticated users to potentially extract otherwise private data from IPs reachable from system service containers used by Rancher. This can include, but is not limited to, services such as cloud provider metadata services. Although Rancher allows users to configure whitelisted domains for system service access, this flaw can still be exploited by a carefully crafted HTTP request. The issue was found and reported by Matt Belisle and Alex Stevenson at Workiva. [#22025]

The Helm chart for rancher/rancher:v2.1.12 is available in the server-charts/latest and server-charts/stable Rancher Helm repos.
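For reference, the chart can be fetched with Helm. A minimal sketch, assuming the standard Rancher chart repository URL layout for this release line (adjust the repo name and URL to your environment):

```shell
# Add the stable Rancher chart repo (URL assumed from Rancher's standard layout)
helm repo add rancher-stable https://releases.rancher.com/server-charts/stable
helm repo update

# Confirm the chart is visible in the repo
helm search rancher-stable/rancher
```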

Major Bug Fixes since v2.1.11

  • Fixed an issue where Rancher-launched Kubernetes clusters would get stuck in an updating state when the --authorization-mode=Webhook flag was enabled on the kubelet.
  • Fixed an issue where the default AMI for EKS was no longer valid after EKS changed their default Kubernetes version to Kubernetes 1.13 [#21980]
  • Fixed an issue where the upstream Jenkins Kubernetes plug-ins couldn’t watch pods [#21945]
  • Fixed an issue where CA certificates provided to Rancher could not be used during node provisioning [#21731]

Known Major Issues

  • Clusters created through Rancher can sometimes get stuck in provisioning [#15970] [#15969] [#15695]
  • The upgrade of the Rancher node-agent daemonset can sometimes get stuck due to a pod removal failure on the Kubernetes side [#16722]


NOTE - Image Name Changes: Please note that as of v2.0.0, our images are rancher/rancher and rancher/rancher-agent. If you are using v1.6, please continue to use rancher/server and rancher/agent.


  • rancher/rancher:v2.1.12
  • rancher/rancher-agent:v2.1.12



Upgrades and Rollbacks

Rancher supports both upgrades and rollbacks starting with v2.0.2. Please specify the version you would like to upgrade or roll back to when changing the Rancher version.

Due to the HA improvements introduced in the v2.1.0 release, the Rancher helm chart is the only supported method for installing or upgrading Rancher. Please use the Rancher helm chart to install HA Rancher. For details, see the HA Install - Installation Outline.

If you are currently using the RKE add-on install method, see Migrating from an RKE add-on install for details on how to move to using a helm chart.
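An upgrade via the Rancher helm chart can be sketched as follows; the release name, namespace, and repo name below are conventional defaults assumed for illustration, not values from this document:

```shell
# Fetch the latest charts from the configured repos
helm repo update

# Upgrade the existing Rancher release to v2.1.12
# ("rancher" release name and "cattle-system" namespace are the conventional
# defaults; adjust both to match your install)
helm upgrade rancher rancher-stable/rancher \
  --namespace cattle-system \
  --version 2.1.12
```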

When upgrading from a version prior to v2.0.3, new pods will be created when scaling up workloads [#14136] - in order to support updated scheduling rules for workloads [#13527], a new field was added to all workloads on update, which causes any pods in workloads from previous versions to be re-created.

Note: When rolling back, you are expected to roll back to the state at the time of your upgrade. Any changes made after the upgrade will not be reflected. When rolling back using a Rancher single-node install, you must specify the exact version you want to change the Rancher version to, rather than using the default :latest tag.
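For a single-node install, pinning the exact tag can be sketched as below; the port mappings shown are the conventional single-node options, assumed for illustration:

```shell
# Run a specific Rancher version by pinning the image tag instead of :latest
docker run -d --restart=unless-stopped \
  -p 80:80 -p 443:443 \
  rancher/rancher:v2.1.12
```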

Note: If you had the helm stable catalog enabled in v2.0.0, we’ve updated the catalog to point directly to the Kubernetes helm repo instead of an internal repo. Please delete the custom catalog that now shows up and re-enable helm stable. [#13582]