Release v2.4.5
Addressing CVEs
- Rancher’s Helm now uses v2.16.8 to address CVE-2020-7919 [PR-27568]
- Rancher monitoring’s Grafana has been updated to address CVE-2020-13379 [#27418]
- Updated Istio to 1.4.9 to address ISTIO-SECURITY-2020-005 [#27064]
Important
- Please review the v2.4.0 release notes for important updates/breaking changes.
- Kubernetes 1.18 is now the default version [#25117] - Whenever upgrading to any Kubernetes version, please review the Kubernetes release notes for any breaking changes.
- Users using a single Docker container install - Etcd in the Docker container has been upgraded from 3.3 to 3.4, so you must take a backup before upgrading in order to be able to roll back to a v2.3.x release. You will not be able to roll back without this backup.
- Users using node pools with RHEL/CentOS nodes [#18065] - The default storage driver for RHEL/CentOS nodes has been updated to `overlay2`. If your node template does not specify a storage driver, any new node will be provisioned using the new default (`overlay2`) instead of the old default (`devicemapper`). If you need to keep using `devicemapper` as your storage driver, edit your node template to explicitly set the storage driver to `devicemapper` (see the API sketch after this list).
- Users running Windows clusters [#25582] - Windows has released a security patch as of February 11. Before upgrading, please update your nodes to include this security patch; otherwise your upgrade will fail until the patch is applied.
- Rancher launched clusters require an additional 500 MB of space - By default, Rancher launched clusters have audit logging enabled on the cluster.
- The upgrade behavior of Rancher launched Kubernetes clusters has changed [#23897] - Please refer to the zero downtime upgrades feature to read more about it.
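As referenced in the RHEL/CentOS note above, the following is a minimal sketch of pinning a node template's storage driver to `devicemapper` through the Rancher v3 API instead of the UI. The endpoint path, the `engineStorageDriver` field name, and the URL/token/template ID placeholders are assumptions; verify them against your own Rancher API view before use.

```python
# Hedged sketch: explicitly set a node template's Docker storage driver to devicemapper
# via the Rancher v3 API. The endpoint, the engineStorageDriver field name, and the
# URL/token/ID placeholders below are assumptions -- confirm them in your Rancher API UI.
import requests

RANCHER_URL = "https://rancher.example.com"      # assumed Rancher server URL
API_TOKEN = "token-xxxxx:secret"                 # assumed API token with node template access
TEMPLATE_ID = "cattle-global-data:nt-example"    # assumed ID of the node template to edit

headers = {"Authorization": f"Bearer {API_TOKEN}"}

# Fetch the current node template so unrelated fields are preserved on update.
resp = requests.get(f"{RANCHER_URL}/v3/nodeTemplates/{TEMPLATE_ID}", headers=headers)
resp.raise_for_status()
template = resp.json()

# Pin the storage driver instead of relying on the new overlay2 default.
template["engineStorageDriver"] = "devicemapper"

resp = requests.put(f"{RANCHER_URL}/v3/nodeTemplates/{TEMPLATE_ID}", headers=headers, json=template)
resp.raise_for_status()
print("Storage driver now:", resp.json().get("engineStorageDriver"))
```

Editing the node template in the UI achieves the same result; the API route is shown only because it is easy to script across many templates.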
Versions
The following versions are now latest and stable:
Type | Rancher Version | Docker Tag | Helm Repo | Helm Chart Version |
---|---|---|---|---|
Latest | v2.4.5 | rancher/rancher:latest | server-charts/latest | v2.4.5 |
Stable | v2.4.5 | rancher/rancher:stable | server-charts/stable | v2.4.5 |
Please review our version documentation for more details on versioning and tagging conventions.
Major Bugs Fixed Since v2.4.4
- Fixed the “backed up reader” issue where downstream clusters could not communicate with Rancher [#26624]
- Fixed an issue where Azure VM nodes couldn’t be added to a pool [#27447]
- Fixed an issue in helm catalogs where re-enabling changed the helm version from v3 to v2 [#27282]
- Pinned docker-ce-cli version to ensure usage of correct API version [#27161]
- Fixed an issue that prevented apps from being upgraded to previous versions [#27150]
- Fixed an issue which stopped aggressive drain on nodes [#27111]
- Fixed runtime error in ingress controller [#27060]
- Fixed an issue which stopped cluster provisioning when using private registry with authentication [#27029]
- Fixed an issue to allow adding registry credentials [#26981]
- Fixed an intermittent bug around updating an AKS cluster’s Kubernetes version [#26907]
- Fixed an issue stopping cluster provisioning on RHEL 7.7 with SELinux enabled on the Docker daemon [#26871]
- Fixed an issue to allow editing the description of a cluster created with an RKE template [#26754]
- Fixed an issue with using cert-manager’s default Issuer [#26505]
- Fixed an issue where a high number of downstream objects caused an out of memory error [#26166]
- Fixed an issue stopping FluentD from exporting logs [#25819]
- Fixed an issue where a notifier wasn’t respecting TLS boolean [#25321]
- Fixed an issue where each Azure VM created a new security group [#24449]
- Allow Docker install script on SUSE [#25071]
- Fixed an issue blocking Windows pods from accessing the API server in VXLAN mode [#20968]
- Fixed CrashLoopBackOff when using Prometheus with persistent storage [#17256]
- Fixed an issue stopping export of logs to AWS ElasticSearch [#21744]
- Fixed an issue where Terraform provider couldn’t recognize certain NodeTemplates [#26160]
- Fixed several UI issues
Other notes
Air Gap Installations and Upgrades
In v2.4.0, an air gap installation no longer requires mirroring the systems chart git repo. Please follow the directions on how to install Rancher to use the packaged systems chart.
Other Upgrades
- Added AWS region af-south-1 [#27131]
- Aliyun cluster driver has updated k8s versions and options [#27277]
- Updated the monitoring chart and its images [#26969]
- Updated the Azure node driver to support the latest client, pre-baked images, and SLES Docker installs [#25342]
- Updated Canal to use Flannel 0.12.0 [#27532]
- Validated Ubuntu 20.04 [#26783]
Known Major Issues
- When using monitoring with persistent storage for Grafana enabled, upgrading monitoring causes the pod to fail to start. Workaround steps are provided in the issue. [#27450]
- When using monitoring, upgrading Kubernetes versions removes the “API Server Request Rate” metric [#27267]
- When a new chart version is added to a Helm 3 catalog, the upgrade process can default to Helm 2, causing an API error. Workaround in issue. [#27252]
Versions
Images
- rancher/rancher:v2.4.5
- rancher/rancher-agent:v2.4.5
Tools
Kubernetes
Upgrades and Rollbacks
Rancher supports both upgrade and rollback. Please note the version you would like to upgrade or roll back to when changing the Rancher version.
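As a minimal sketch of pinning a specific version during a Helm-based upgrade or rollback, the snippet below wraps `helm upgrade` with an explicit `--version`. The release name `rancher`, namespace `cattle-system`, and repo alias `rancher-latest` (pointing at the server-charts/latest repo from the table above) are assumptions that should match how Rancher was originally installed.

```python
# Hedged sketch: pin the Rancher chart version when upgrading (or rolling back) with Helm.
# Release name, namespace, and repo alias below are assumptions from a typical install.
import subprocess

# Version string from the Versions table; if your repo lists chart versions without the
# leading "v", drop it (verify with `helm search repo rancher-latest/rancher --versions`).
CHART_VERSION = "v2.4.5"

subprocess.run(
    [
        "helm", "upgrade", "rancher", "rancher-latest/rancher",
        "--namespace", "cattle-system",
        "--version", CHART_VERSION,   # pinning --version selects the Rancher version to move to
        "--reuse-values",             # keep the values used by the existing install
    ],
    check=True,
)
```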
Please be aware that upon an upgrade to v2.3.0+, any edits to a Rancher launched Kubernetes cluster will cause all system components to restart due to added tolerations to Kubernetes system components. Plan accordingly.
Recent changes to cert-manager require an upgrade if you have an HA install of Rancher using self-signed certificates. If you are using cert-manager older than v0.9.1, please see the documentation on how to upgrade cert-manager.
Important: When rolling back, we expect you to roll back to the state at the time of your upgrade. Any changes post-upgrade would not be reflected.