Rancher Release - v2.4.8

Release v2.4.8

This release only addresses issues introduced after v2.4.6. The release notes remain the same as for v2.4.6 except for the following changes:

  1. Fixed a UI issue where permissions to list KMS Keys were required to modify any AWS Node Templates [#28724]
  2. Fixed an issue with the Kubernetes go-client, which was not compatible with older Kubernetes versions and could throw a `Timeout: Too large resource version` message in the logs. Note: This message has no known direct impact on clusters. [#28623]
  3. Fixed the default value for a new setting (auth-token-max-ttl-minutes)
    a) Fixed an issue where new API tokens created in v2.4.6 would expire in 24 hours by default because of the new auth-token-max-ttl-minutes setting introduced in v2.4.6. The setting now defaults to 0, allowing tokens to never expire, as in v2.4.5. https://github.com/rancher/rancher/issues/28668
    b) Fixed a UI issue where API tokens would have the wrong expiry timestamp if they were created with a non-zero TTL while the new auth-token-max-ttl-minutes setting introduced in v2.4.6 was set to 0 (never expire). https://github.com/rancher/rancher/issues/28678
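The new default can be confirmed or changed through the Rancher v3 settings API. The following is a minimal sketch, where the server URL and the bearer token in $TOKEN are placeholders for your own install:

```shell
# Read the current value of auth-token-max-ttl-minutes
# ("0" in v2.4.8, meaning tokens never expire unless given an explicit TTL)
curl -s -H "Authorization: Bearer $TOKEN" \
  https://rancher.example.com/v3/settings/auth-token-max-ttl-minutes

# Optionally cap new tokens at 7 days (7 * 24 * 60 = 10080 minutes)
curl -s -X PUT \
  -H "Authorization: Bearer $TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"value": "10080"}' \
  https://rancher.example.com/v3/settings/auth-token-max-ttl-minutes
```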

Important notes
  • Kubernetes 1.18 is now the default version [#25117] - Whenever upgrading to any Kubernetes version, please review the Kubernetes release notes for any breaking changes.
  • Users using a single Docker container install - etcd in the Docker container has been upgraded from 3.3 to 3.4, so you must take a backup before upgrading in order to be able to roll back to a v2.3.x release. You will not be able to roll back without this backup.

  • Users using node pools with RHEL/CentOS nodes [#18065] - The default storage driver for RHEL/CentOS nodes has been updated to overlay2. If your node template does not specify a storage driver, any new node will be provisioned using the new default (overlay2) instead of the old default (devicemapper). If you need to keep using devicemapper as your storage driver, edit your node template to explicitly set the storage driver to `devicemapper`.

  • Users running Windows clusters [#25582] - Windows has launched a security patch as of Feb 11. Before upgrading, please update your nodes to include this security patch, otherwise your upgrade will fail until the patch is applied.

  • Rancher launched clusters require an additional 500MB of space - By default, Rancher launched clusters have audit logging enabled on the cluster.

  • The upgrade behavior of Rancher launched Kubernetes clusters has changed [#23897] - Please refer to the zero downtime upgrades feature to read more about it.
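The storage driver note for RHEL/CentOS node pools above can be checked against existing node templates via the Rancher v3 API; a sketch, with the server URL, $TOKEN, and the template ID as placeholders:

```shell
# Show which storage driver a node template pins; an empty
# engineStorageDriver falls back to the new overlay2 default
curl -s -H "Authorization: Bearer $TOKEN" \
  "https://rancher.example.com/v3/nodeTemplates/<template-id>" \
  | grep -o '"engineStorageDriver":"[^"]*"'

# To keep the old behavior, edit the template so that the field reads:
#   "engineStorageDriver": "devicemapper"
```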


The following versions are now latest and stable:

Type     Rancher Version   Docker Tag               Helm Repo              Helm Chart Version
Latest   v2.4.8            rancher/rancher:latest   server-charts/latest   v2.4.8
Stable   v2.4.8            rancher/rancher:stable   server-charts/stable   v2.4.8

Please review our version documentation for more details on versioning and tagging conventions.

Features and Enhancements
  • Kubeconfig API Tokens now support TTL [#28378]

Updated Kubernetes Versions

Note: These were made available earlier, but are officially packaged into this release.

  • Updated to use v1.16.13-rancher1-2, v1.17.9-rancher1-2 and v1.18.6-rancher1-2
  • Kubernetes related CVEs [#27950]
  1. CVE-2020-8557 - Node disk DoS by writing to container /etc/hosts
  2. CVE-2020-8558 - Node setting allows for neighboring hosts to bypass localhost boundary
  3. CVE-2020-8559 - Privilege escalation from compromised node to cluster
  • Updated Canal to use Flannel v0.12.0 [#27577]
  • Increase memory limit for Minio [#28025]

Major Bugs Fixed Since v2.4.5

  • Windows clusters now support prefix paths and Windows only args/env/binds via new win_ configs [#28143 #25108]
  • Deleting RKE Templates is now more reliable [#26861]
  • Fixed etcd snapshot restores from some S3 bucket providers, such as NetApp StorageGRID [#27608]

Minor Bugs Fixed Since v2.4.5

  • Fixed Auth Error when Installing Helm Charts from a local directory [#23832]
  • `Timeout: Too large resource version` listing error fixed on k8s v1.18 [#28477]
  • Kiali Traffic Graphs now working in 1.5.x [#28109]
  • Rancher Kubernetes Auth Proxy now utilizes forward proxy [#25488]
  • Fixed a bug with the CLI and the --format flag [#27661]
  • Fixed a bug with the CLI and the --values flag [#27346]
  • Fixed a bug with editing multi-cluster apps [#27416]
  • Fixed a bug with Node Draining [#23333]
  • Fixed a bug where node pools couldn't be scaled by users who didn't create them [#27031]
  • Fixed a bug by encoding Azure AD Tokens [#27774]
  • Extended timeouts for logs being flooded with `context deadline exceeded` [#27736]
  • RKE Snapshots now store certs for backup restores [#1336]
  • Config Maps and Secrets now accept `.` in their names [#25955]
  • Fixed UI Issues [#27849 #27769 #27705 #27439 #27416 #27333 #27021 #26865 #26827 #26469 #15037 #4047]

Other notes

  • Rancher Machine created nodes now default to overlay2 file system [#27414]
  • Default CIS Scan now 1.5 [#27446]
  • EC2 Node Templates now support KMS Encryption Key [#27965]
  • EC2 Node Templates now support MetadataOptions [#25078]
  • Notifier added for Microsoft Teams [#15802]
  • Cluster API lister now uses auth cache for speed improvement [#27192]

Air Gap Installations and Upgrades

In v2.4.0, an air gap installation no longer requires mirroring the systems chart git repo. Please follow the directions on how to install Rancher to use the packaged systems chart.

Other Upgrades

Known Major Issues

  • When using monitoring with persistent storage for Grafana enabled, upgrading monitoring causes the pod to fail to start. Workaround steps are provided in the issue. [#27450]
  • When using monitoring, upgrading Kubernetes versions removes the “API Server Request Rate” metric [#27267]
  • When a new chart version is added to a helm 3 catalog, the upgrade process can default to helm 2, causing an API error. Workaround in issue. [#27252]


Docker Images
  • rancher/rancher:v2.4.8
  • rancher/rancher-agent:v2.4.8



Upgrades and Rollbacks

Rancher supports both upgrades and rollbacks. Please note the version you would like to upgrade or roll back to in order to change the Rancher version.

Please be aware that upon an upgrade to v2.3.0+, any edits to a Rancher launched Kubernetes cluster will cause all system components to restart due to added tolerations to Kubernetes system components. Plan accordingly.

Recent changes to cert-manager require an upgrade if you have an HA install of Rancher using self-signed certificates. If you are using a cert-manager version older than v0.9.1, please see the documentation on how to upgrade cert-manager.
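Before upgrading, you can check which cert-manager version is currently running; a sketch assuming cert-manager is installed under the upstream deployment name in the cert-manager namespace:

```shell
# Print the controller image tag, which encodes the cert-manager version;
# anything older than v0.9.1 should be upgraded first
kubectl -n cert-manager get deployment cert-manager \
  -o jsonpath='{.spec.template.spec.containers[0].image}'
```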

Important: When rolling back, we expect you to roll back to the state at the time of your upgrade. Any changes made after the upgrade will not be reflected.