Release v2.4.9
Important
- Please review the v2.4.0 release notes for important updates / breaking changes.
- Kubernetes 1.18 is now the default version [#25117] - Whenever upgrading to any Kubernetes version, please review the Kubernetes release notes for any breaking changes.
- Users using a single Docker container install - Etcd in the Docker container has been upgraded from 3.3 to 3.4, so you must take a backup before upgrading in order to be able to roll back to a v2.3.x release. You will not be able to roll back without this backup.
- Users using node pools with RHEL/CentOS nodes [#18065] - The default storage driver for RHEL/CentOS nodes has been updated to `overlay2`. If your node template does not specify a storage driver, any new node will be provisioned using the new default (`overlay2`) instead of the old default (`devicemapper`). If you need to keep using `devicemapper` as your storage driver option, edit your node template to explicitly set the storage driver to `devicemapper`.
- Users running Windows clusters [#25582] - Windows has released a security patch as of Feb 11. Before upgrading, please update your nodes to include this security patch; otherwise your upgrade will fail until the patch is applied.
- Rancher launched clusters require an additional 500MB of space - By default, Rancher launched clusters have audit logging enabled on the cluster.
- The upgrade behavior of Rancher launched Kubernetes clusters has changed [#23897] - Please refer to the zero downtime upgrades feature to read more about it.
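For the single Docker container note above, the backup can be taken with Rancher's documented single-node data-container flow. This is a sketch; the container name (`rancher`) and tag are placeholders you must substitute for your own install:

```shell
# Stop the running Rancher server container (placeholder name "rancher")
docker stop rancher

# Create a data container that shares the server's volumes
# (use the same image tag your server is currently running)
docker create --volumes-from rancher --name rancher-data rancher/rancher:v2.3.6

# Archive /var/lib/rancher (which includes etcd data) to the current directory
docker run --volumes-from rancher-data -v "$PWD:/backup" --rm \
  busybox tar pzcvf /backup/rancher-data-backup.tar.gz /var/lib/rancher

# Restart the server
docker start rancher
```

Keep the resulting tarball somewhere safe; it is the only way back to v2.3.x once etcd has been upgraded to 3.4.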
Versions
The following versions are now latest and stable:
Type | Rancher Version | Docker Tag | Helm Repo | Helm Chart Version |
---|---|---|---|---|
Latest | v2.4.9 | rancher/rancher:latest | server-charts/latest | v2.4.9 |
Stable | v2.4.9 | rancher/rancher:stable | server-charts/stable | v2.4.9 |
Please review our version documentation for more details on versioning and tagging conventions.
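The Helm repos in the table above can be added from Rancher's published chart server; a sketch using the stable channel (swap `stable` for `latest` as needed, and note the chart version string is taken from the table above):

```shell
# Add and refresh the Rancher stable chart repo
helm repo add rancher-stable https://releases.rancher.com/server-charts/stable
helm repo update

# Download the chart pinned to this release (Helm 3 syntax)
helm pull rancher-stable/rancher --version v2.4.9
```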
Enhancements
- Introduced new Kubernetes versions with a new Ingress image [nginx-0.35.0-rancher2] to fix an issue where load balancer IPs were not updated when nodes were powered off. [#28230], [#13862]
- Introduced options to configure different network modes for nginx ingress controller. [rancher/rke#1876], [#28329]
- Introduced a flag to change the number of Helm Version Histories kept for Charts deployed by Rancher. [#28765]
- Introduced a new monitoring chart (`v0.1.4`) with updated Alpine Linux images in prometheus-auth [#29290] and support for custom taints. [#27253]
- Introduced support for both basic and standard load balancers when creating AKS clusters. [#23715]
- Validated support for the following OS versions:

OS | Issue |
---|---|
RHEL 7.9 | https://github.com/rancher/rancher/issues/29736 |
Oracle Linux 7.9 | https://github.com/rancher/rancher/issues/29737 |
CentOS 7.8 | https://github.com/rancher/rancher/issues/29738 |
SLES 15 SP2 | https://github.com/rancher/rancher/issues/29739 |
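The nginx ingress network mode enhancement above is configured in RKE's `cluster.yml`. A hedged sketch, assuming the keys documented for RKE's ingress section (`network_mode` accepts `hostNetwork`, `hostPort`, or `none`; the port fields apply to `hostPort` mode):

```yaml
# cluster.yml fragment: run the nginx ingress controller via host ports
# instead of host networking (keys per RKE ingress documentation)
ingress:
  provider: nginx
  network_mode: hostPort
  http_port: 9090
  https_port: 9443
```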
Major Bugs Fixed Since v2.4.8
- Clusters using the vSphere cloud provider now drain nodes before deleting them. [#18221], [#24690]
- Fixed an issue where deleting the etcd leader would result in an unhealthy cluster [#24547]
- Fixed an issue in CLI where user provided values weren’t applied correctly [#27841], [#28000]
- Fixed an issue where ingress load balancer IPs were not populated correctly if project network isolation was enabled on the cluster. [#26677]
- Fixed an issue in Azure AD that caused a memory leak when fetching users' groups. [#29055]
- Fixed an issue in app installation so that an app no longer keeps retrying and filling disk space if the Helm install fails. [#27146]
Minor Bugs Fixed Since v2.4.8
- S3 etcd backup settings are now validated on the downstream user cluster instead of Rancher's management cluster [#28739]
- Fixed an issue in vSphere where node provisioning failed because of datastore names containing space characters [#27699]
- Fixed an issue when draining a node from the API [#28998]
- Fixed an issue in Rancher Logging where configuration would get lost when using Edit as File in the UI. [#26559]
- Fixed an issue where read-only role couldn’t be assigned to a member during cluster create. [#23061]
- Fixed an issue where child resource groups couldn’t be selected in vSphere node template. [#24507]
- Fixed an issue with a `no kind Issuer` error message with updated cert-manager `v1.0.0` [#29056]
- Fixed Docker installation on Oracle Linux 7.8 [#27691]
- Fixed an issue in UI where AKS network policy wasn’t supported due to case sensitivity [#28880]
Other notes
- OKE and OCI Node Drivers now populate values dynamically from OCI API. [#29621], [#27051]
- AWS Driver now detects root device name from the AMI. [#29568]
- New E2 Compute shapes are now added to the OKE driver list. [#29382]
- Added support for vSphere/ESXi 7.0 [#29519], [#27732]
- Added Kubernetes 1.18 for EKS clusters [#29508]
Air Gap Installations and Upgrades
In v2.4.0, an air gap installation no longer requires mirroring the systems chart git repo. Please follow the directions on how to install Rancher to use the packaged systems chart.
Other Upgrades
Known Major Issues
- When using monitoring with persistent storage for Grafana enabled, upgrading monitoring causes the pod to fail to start. Workaround steps are provided in the issue. [#27450]
- When using monitoring, upgrading Kubernetes versions removes the “API Server Request Rate” metric [#27267]
- When a new chart version is added to a Helm 3 catalog, the upgrade process can default to Helm 2, causing an API error. Workaround in issue. [#27252]
- When Project Network Isolation is turned on, upgrading from a previous Rancher version to v2.4.9 will cause an increase in CPU usage / logging. The workaround is to turn off PNI. [#30052] A fix is tracked in [#30045].
Versions
Images
- rancher/rancher:v2.4.9
- rancher/rancher-agent:v2.4.9
Tools
Kubernetes
Upgrades and Rollbacks
Rancher supports both upgrades and rollbacks. Please note the version you would like to upgrade or roll back to in order to change the Rancher version.
Please be aware that upon an upgrade to v2.3.0+, any edits to a Rancher launched Kubernetes cluster will cause all system components to restart due to added tolerations to Kubernetes system components. Plan accordingly.
Recent changes to cert-manager require an upgrade if you have an HA install of Rancher using self-signed certificates. If you are using a cert-manager version older than v0.9.1, please see the documentation on how to upgrade cert-manager.
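One way to check which cert-manager version an HA install is running is to inspect the deployment's image tag. A sketch, assuming cert-manager was installed into the conventional `cert-manager` namespace with the default deployment name:

```shell
# Print the image (including version tag) of the cert-manager deployment
kubectl -n cert-manager get deploy cert-manager \
  -o jsonpath='{.spec.template.spec.containers[0].image}'
```

If the reported tag is older than v0.9.1, the documented cert-manager upgrade applies before upgrading Rancher.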
Important: When rolling back, you are expected to roll back to the state at the time of your upgrade. Any changes made post-upgrade would not be reflected.