Users upgrading from v2.2.13+ or v2.3.8+: The Kubernetes versions used in those releases will not work in this release due to a bug in kontainer-driver-metadata. You must upgrade to v2.4.4+. [#26752]
Please review the v2.4.0 release notes for important updates and breaking changes.
Users using a single Docker container install - etcd in the Docker container has been upgraded from 3.3 to 3.4, so you must take a backup before upgrading in order to be able to roll back to a v2.3.x release. Without this backup, you will not be able to roll back.
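The backup for a single-container install can be sketched roughly as follows; the container name `rancher-server` and the `v2.3.5` tag are placeholders for your actual container name and currently running version:

```shell
# Stop the running Rancher container (substitute your container name)
docker stop rancher-server

# Create a data container that shares the volumes of the Rancher container;
# the image tag should match the version you are currently running
docker create --volumes-from rancher-server --name rancher-data rancher/rancher:v2.3.5

# Archive the data volume to a tarball in the current directory,
# which you can later restore from if you need to roll back
docker run --volumes-from rancher-data -v "$PWD:/backup" \
  alpine tar zcvf /backup/rancher-data-backup-v2.3.5-$(date +%Y%m%d).tar.gz /var/lib/rancher
```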
Users using node pools with RHEL/CentOS nodes [#18065]: The default storage driver for RHEL/CentOS nodes has been updated to `overlay2`. If your node template does not specify a storage driver, any new node will be provisioned using the new default (`overlay2`) instead of the old default (`devicemapper`). If you need to keep using `devicemapper` as your storage driver, edit your node template to explicitly set the storage driver to `devicemapper`.
Users running Windows clusters [#25582] - Windows released a security patch on Feb 11. Before upgrading, please update your nodes to include this security patch; otherwise your upgrade will fail until the patch is applied.
Rancher launched clusters require an additional 500 MB of disk space - By default, Rancher launched clusters now have audit logging enabled on the cluster.
The upgrade behavior of Rancher launched Kubernetes clusters has changed [#23897] - Please refer to the zero downtime upgrades feature to read more about it.
The following versions are now latest and stable:
| Type | Rancher Version | Docker Tag | Helm Repo | Helm Chart Version |
| ---- | --------------- | ---------- | --------- | ------------------ |
Please review our version documentation for more details on versioning and tagging conventions.
Features and Enhancements
- Introduced Support for Custom Image Name or ID on Azure Node Templates [#23010]
Hosted Kubernetes Provider Updates
- Introduced regions cn-northwest-1 and cn-north-1 for EKS clusters [#25613]
- Added support to show AKS/GKE/EKS region information in cluster dashboard [#25656]
Email Notifier Enhancements
- Added body text to email notifier resolved notifications [#24156]
- Added support for customizable Header/Footer/‘Classification Banners’ to Rancher [#25694]
Feature Flags for Experimental Features
Specific experimental components inside Rancher can be turned on and off via feature flags. You can manage feature flags through our UI; certain feature flags require a Rancher restart. Alternatively, you can refer to our docs on how to turn on the features when starting Rancher.
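A minimal sketch of enabling a feature flag at startup for a single-container install, using the `--features` option; the flag name shown matches the "UI for unsupported storage drivers" flag, and the port mappings and image tag are illustrative:

```shell
# Start Rancher with a feature flag enabled at boot time;
# multiple flags can be given as a comma-separated list
docker run -d -p 80:80 -p 443:443 \
  --privileged \
  rancher/rancher:v2.4.4 \
  --features=unsupported-storage-drivers=true
```

Flags toggled this way take effect on startup, whereas flags toggled in the UI may require the restart noted in the table below.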
| Feature Flag | Feature Flag Name | Default Value | Available as of | Rancher Restart Required? |
| ------------ | ----------------- | ------------- | --------------- | ------------------------- |
| Next Gen UI | | | | |
| UI for unsupported storage drivers | | | | |
Major Bugs Fixed Since v2.4.2
- Fixed an issue where cattle-cluster-agent memory grew in k3s clusters on Rancher v2.4.2 [#26633] [#26577]
- Fixed an issue where a memory leak was causing Rancher server to OOM [#21361]
- Fixed an issue where preferences showed another user’s name and username instead of the logged-in user’s [#14420]
- Fixed an issue where logging into Rancher would be extremely slow for LDAP when you have thousands of groups [#26061]
- Fixed an issue where the CLI wouldn’t correctly parse boolean values when using the `--values` parameter [#21083]
- Fixed an issue where nodes removed from UI were not able to rejoin the cluster because they weren’t properly cleaned up [#23254]
- Fixed an issue where you couldn’t create Azure node templates using the AzureUSGovernmentCloud endpoint [#23350]
- Fixed an issue where you couldn’t edit an EKS cluster if you had updated the version outside of Rancher [#24171]
- Fixed an issue where editing applications launched from project catalog were requiring projectID [#24371]
- Fixed an issue where EKS clusters were not using rotated AWS credentials when trying to edit the cluster [#25835]
- Fixed an issue where self signed TLS certificates were failing to generate after v2.4.2 upgrade [#26457]
- Fixed an issue where you couldn’t directly connect to clusters upgraded from v2.4.0 but only through Rancher [#26555]
- Fixed an issue where single node k3s clusters kept showing `Upgrading` even after the cluster was already upgraded [#26286]
- Fixed an issue where, during a downstream user cluster upgrade, the next node was drained before the upgrade of the current node had completed [#26401]
- Fixed an issue where the latest Monitoring 0.1.0 wouldn’t work when persistent storage was enabled for monitoring
Air Gap Installations and Upgrades
In v2.4.0, an air gap installation no longer requires mirroring the systems chart git repo. Please follow the directions on how to install Rancher to use the packaged systems chart.
Known Major Issues
- During a downstream user cluster upgrade, if you have any PodDisruptionBudgets in your cluster, the drain process could get stalled [#26400]
- When nodes are powered off in the cluster, the metrics server and CoreDNS pods may not get evicted from the node and must be manually deleted so that they are re-scheduled to an active node [#26190, #26191]
- Logging doesn’t work on imported k3s clusters [#24157]
Upgrades and Rollbacks
Please be aware that upon an upgrade to v2.3.0+, any edits to a Rancher launched Kubernetes cluster will cause all system components to restart due to added tolerations to Kubernetes system components. Plan accordingly.
Recent changes to cert-manager require an upgrade if you have an HA install of Rancher using self-signed certificates. If you are using cert-manager older than v0.9.1, please see the documentation on how to upgrade cert-manager.
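A rough sketch of the cert-manager upgrade flow, assuming a Helm-based install in the `cert-manager` namespace; the release name, namespace, and target version shown are illustrative and should be taken from the cert-manager upgrade documentation:

```shell
# Back up existing cert-manager custom resources before upgrading
kubectl get -o yaml --all-namespaces \
  issuer,clusterissuer,certificate > cert-manager-backup.yaml

# Add/refresh the Jetstack chart repository and upgrade the release
helm repo add jetstack https://charts.jetstack.io
helm repo update
helm upgrade --install cert-manager jetstack/cert-manager \
  --namespace cert-manager --version v0.14.2
```

Note that upgrades across the v0.11 API boundary also involve CRD changes, so follow the linked documentation rather than relying on the chart upgrade alone.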
Important: When rolling back, we expect you to roll back to the state at the time of your upgrade. Any changes made after the upgrade will not be reflected.