Release v2.5.9
Changes in this Release
Security Fixes for Rancher Vulnerabilities
This release addresses security issues found in Rancher:
- Prevented privilege escalation via a malicious “Connection” header. Fixes CVE-2021-31999. #33588
- Used apiGroups instead of “*” when creating Kubernetes RBAC resources for apps to avoid granting permissions to all app CRDs that exist in the cluster. Fixes CVE-2021-25318. #33590
- An unprivileged user can no longer use another user’s cloud credential for API requests to a cloud provider. Fixes CVE-2021-25320. #33589
For more details, see the security advisories page.
Additional Security Fixes
- Processes no longer panic upon receipt of malicious protobuf messages. Fixes CVE-2021-3121. #32944
- Updated minio-go, removed dependency on etcd, and updated rancherd RKE2 version to v1.20.7+rke2r2. #33050
Bug Fixes
- vSphere vCenter server entries are removed properly. #27306
- The size of v3.Catalog objects was reduced to avoid timeouts and CPU consumption spikes. #33256
- Services that are automatically deployed for workloads are removed when the workloads are removed. #33180
- Rancher startup is no longer blocked by CleanupDuplicateBindings. #32873
- Non-error logging for etcd backups was moved to the debug level to avoid flooding logs when many clusters are managed. #32826
- Fixed an issue where nodes get stuck in Active when more than 5 nodes are being created at a time. #32681
- Errors are no longer seen when registering more than 100 clusters simultaneously. #32154
- Nodes are no longer stuck at “waiting to register” after Rancher is upgraded. #31999
Install/Upgrade Notes
- Rancher install or upgrade must occur with Helm 3.2.x+ due to the changes with the latest cert-manager release. #29213
- The local Kubernetes cluster for the Rancher server should be upgraded to Kubernetes 1.17+ before installing Rancher 2.5.
- If using a proxy in front of an air-gapped Rancher, you must pass additional parameters to `NO_PROXY` (a hedged example follows this list). #2725 Docs
- The local cluster can no longer be turned off, which means all admins will have access to the local cluster. If you would like to restrict permissions to the local cluster, there is a new restricted-admin role that must be used. Access to the local cluster can now be disabled by setting `hide_local_cluster` to true from the v3/settings API (see the settings API sketch after this list). #29325 Docs
- For users upgrading from `>=v2.4.4` to `v2.5.x` with clusters where the ACI CNI is enabled, note that upgrading Rancher will result in automatic cluster reconciliation. This is applicable for Kubernetes versions `v1.17.16-rancher1-1`, `v1.17.17-rancher1-1`, `v1.17.17-rancher2-1`, `v1.18.14-rancher1-1`, `v1.18.15-rancher1-1`, `v1.18.16-rancher1-1`, and `v1.18.17-rancher1-1`. Please refer to the workaround BEFORE upgrading to `v2.5.x`. #32002
- For users upgrading from `<=v2.4.8` (`<= RKE v1.1.6`) to `v2.4.12+` (`RKE v1.1.13+`) or `v2.5.0+` (`RKE v1.2.0+`), please note that editing and saving the cluster (even with no changes, or a trivial change such as the cluster name) will result in cluster reconciliation and an upgrade of `kube-proxy` on all nodes, because of a change in `kube-proxy` binds. This happens only on the first edit; later edits should not affect the cluster. #32216
- For installing or upgrading Rancher in an air-gapped environment, please add the flag `--no-hooks` to the `helm template` command to skip rendering files for Helm’s hooks (see the `helm template` sketch after this list). #3226
- There is currently a setting allowing users to configure the length of refresh time in cron format: `eks-refresh-cron`. That setting is now deprecated and has been migrated to a standard seconds format in a new setting: `eks-refresh` (also shown in the settings API sketch after this list). If previously set, the migration will happen automatically. #31789
- When upgrading from `<=v2.5.7` to `>=v2.5.8`, you may notice that in Apps & Marketplace there is a fleet-agent release stuck at uninstalling. This is caused by the migration of the fleet-agent release name. It is safe to delete the fleet-agent release, as it is no longer used; deleting it should not remove the actual fleet-agent deployment, since it has been migrated. #362
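As an illustration of the `NO_PROXY` note above, here is a minimal sketch of a Helm-based install behind a proxy. This is an assumption-laden example: the proxy URL, hostname, and `noProxy` list are placeholders that must be adapted to your network, and the `proxy`/`noProxy` chart values should be verified against your Rancher chart version.

```bash
# Hedged example: install Rancher behind a proxy, passing a NO_PROXY list.
# All hostnames and CIDRs below are placeholders; adjust to your environment.
helm install rancher rancher-stable/rancher \
  --namespace cattle-system \
  --set hostname=rancher.example.com \
  --set proxy=http://proxy.example.com:8888 \
  --set noProxy="127.0.0.0/8\,10.0.0.0/8\,172.16.0.0/12\,192.168.0.0/16\,.svc\,.cluster.local"
```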
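For the `hide_local_cluster` and `eks-refresh` settings above, a hedged sketch of updating them through the v3/settings API follows. The API token and server URL are placeholders, and the dashed setting names (`hide-local-cluster`, `eks-refresh`) are assumptions; list `/v3/settings` on your server for the authoritative names.

```bash
# Hedged example: update Rancher settings via the v3/settings API.
# token-xxxxx:<secret> and rancher.example.com are placeholders.
curl -s -u "token-xxxxx:<secret>" \
  -X PUT -H 'Content-Type: application/json' \
  -d '{"value": "true"}' \
  https://rancher.example.com/v3/settings/hide-local-cluster

# eks-refresh now takes a plain seconds value (300 here is an arbitrary example).
curl -s -u "token-xxxxx:<secret>" \
  -X PUT -H 'Content-Type: application/json' \
  -d '{"value": "300"}' \
  https://rancher.example.com/v3/settings/eks-refresh
```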
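And for the air-gapped `--no-hooks` note, a minimal sketch of rendering the chart offline is shown below; the chart archive name, hostname, and private registry are placeholders following the shape of the standard air gap procedure.

```bash
# Hedged example: render Rancher manifests offline, skipping Helm hook files.
helm template rancher ./rancher-2.5.9.tgz \
  --output-dir . \
  --no-hooks \
  --namespace cattle-system \
  --set hostname=rancher.example.com \
  --set rancherImage=registry.example.com/rancher/rancher \
  --set systemDefaultRegistry=registry.example.com \
  --set useBundledSystemChart=true
```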
Docker Install
- When starting the Rancher Docker container, the privileged flag must be used. See the docs for more info.
- When installing in an air-gapped environment, you must supply a custom `registries.yaml` file to the `docker run` command as shown in the K3s docs (a hedged example follows this list). If the registry has certs, then you will need to also supply those. #28969
- There are UI issues around startup time. #28800 #28798
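To illustrate the notes above, here is a hedged sketch of a Docker install that combines the privileged flag with a custom `registries.yaml`; the registry hostname, file paths, and mirror layout are placeholders, so check the K3s docs for the full `registries.yaml` schema.

```bash
# Hedged example: a minimal K3s-style registries.yaml pointing at a private mirror.
cat > /opt/rancher/registries.yaml <<'EOF'
mirrors:
  docker.io:
    endpoint:
      - "https://registry.example.com"
EOF

# Run Rancher with the required privileged flag and mount the registries.yaml file.
docker run -d --restart=unless-stopped --privileged \
  -p 80:80 -p 443:443 \
  -v /opt/rancher/registries.yaml:/etc/rancher/k3s/registries.yaml \
  registry.example.com/rancher/rancher:v2.5.9
```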
Kubernetes 1.19 + firewalld
- For K8s 1.19 and newer, we recommend disabling firewalld as it has been found to be incompatible with various CNI plugins. #28840
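If you follow that recommendation, disabling firewalld on a systemd-based node is a two-command sketch (assuming firewalld is managed by systemd on your distribution):

```bash
# Stop firewalld immediately and keep it from starting on boot.
sudo systemctl stop firewalld
sudo systemctl disable firewalld
```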
Versions
Please refer to the README for latest and stable versions.
Please review our version documentation for more details on versioning and tagging conventions.
Other Notes
Deprecated Features
| Feature | Justification |
|---|---|
| Cluster Manager - Rancher Monitoring | Monitoring in the Cluster Manager UI has been replaced with a new monitoring chart available in Apps & Marketplace in Cluster Explorer. |
| Cluster Manager - Rancher Alerts and Notifiers | Alerting and notifier functionality is now directly integrated with the new monitoring chart available in Apps & Marketplace in Cluster Explorer. |
| Cluster Manager - Rancher Logging | Functionality replaced with a new logging solution using the new logging chart available in Apps & Marketplace in Cluster Explorer. |
| Cluster Manager - MultiCluster Apps | Deploying to multiple clusters is now recommended to be handled with Rancher Continuous Delivery, powered by Fleet, available in Cluster Explorer. |
| Cluster Manager - Kubernetes CIS 1.4 Scanning | Kubernetes CIS 1.5+ benchmark scanning has replaced it, with a new scan tool deployed via the cis benchmarks chart available in Apps & Marketplace in Cluster Explorer. |
| Cluster Manager - Rancher Pipelines | Git-based deployment pipelines are now recommended to be handled with Rancher Continuous Delivery, powered by Fleet, available in Cluster Explorer. |
| Cluster Manager - Istio v1.5 | The Istio project has ended support for Istio 1.5 and recommends that all users upgrade. Newer Istio versions are now available as a chart in Apps & Marketplace in Cluster Explorer. |
| Cluster Manager - Provision Kubernetes v1.16 Clusters | We have ended support for Kubernetes v1.16. Cluster Manager no longer provisions new v1.16 clusters. Existing v1.16 clusters are unaffected. |
Known Major Issues
- Logging (Cluster Explorer): Windows nodeAgents are not deleted when performing a `helm upgrade` after disabling Windows logging on a Windows cluster. #32325
- Rotating encryption keys with a custom encryption provider is not supported. #30539
- Istio 1.5 is not supported in air-gapped environments. Note that the Istio project has ended support for Istio 1.5; see Deprecated Features above.
- In air-gapped setups, the generated `rancher-images.txt` that is used to mirror images on private registries does not contain the images required to run Monitoring in Cluster Manager v0.1.x. Clusters running k8s 1.15 and below will need to upgrade their k8s versions and leverage Monitoring in Cluster Manager v0.2.x, or upgrade to Monitoring in Cluster Explorer.
- Importing a Kubernetes v1.21 cluster might not work properly. We are planning to add support for Kubernetes v1.21 in the future.
- Deploying Monitoring V2 on a Windows cluster with win_prefix_path set requires users to deploy Rancher Wins Upgrader to restart wins on the hosts to start collecting metrics in Prometheus. #32535
- Monitoring V2 fails to scrape ingress-nginx pods on any nodes except the one Prometheus is deployed on, if the security group used by worker nodes blocks incoming requests to port 10254. The workaround for this issue is to open up port 10254 on all hosts (a hedged example follows this list). #32563
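As one way to apply that port 10254 workaround on AWS, here is a hedged AWS CLI sketch; the security group ID is a placeholder, and on other platforms the equivalent node-to-node firewall rule applies.

```bash
# Hedged example: allow node-to-node traffic on port 10254 (ingress-nginx metrics)
# within the worker nodes' security group. Replace sg-0123456789abcdef0.
aws ec2 authorize-security-group-ingress \
  --group-id sg-0123456789abcdef0 \
  --protocol tcp \
  --port 10254 \
  --source-group sg-0123456789abcdef0
```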
Cluster Explorer Feature Caveats and Upgrades
- General
  - Not all new features are currently installable on a hardened cluster.
  - New features are expected to be deployed using the Helm 3 CLI and not with the Rancher CLI.
- Rancher Backup
  - When migrating to a cluster with the Rancher Backup feature, the server-url cannot be changed to a different location; it must continue to use the same URL.
- Monitoring
  - Monitoring sometimes errors on installation because it can’t identify CRDs. #29171
- Istio
  - When accessing tracing information for a service in the Kiali dashboard bundled with v1.9.3 and v1.8.5, attempting to change the display options may result in a persistent error for that service’s tracing information. Until this issue is resolved, we recommend using the Jaeger dashboard if you would like different details for a particular service’s tracing. The resolution for this issue can be found in #32330.
  - Be aware that when upgrading from Istio 1.7.4 or earlier to any later version, there may be connectivity issues. Upgrade notes #31811
  - Starting in v1.8.x, DNS is supported natively. This means the additional add-on component `istioCoreDNS` is deprecated in v1.8.x and is not supported in v1.9.x. If you are upgrading from v1.8.x to v1.9.x and you are using the `istioCoreDNS` add-on, it is recommended that you disable it and switch to the natively supported DNS prior to upgrading. If you upgrade without disabling it, you will need to manually clean up your installation, as it will not be removed automatically. #31761 #31265
Cluster Manager Feature Caveats and Upgrades
- EKS & GKE
  - When creating EKS and GKE clusters in Terraform, string fields cannot be set to empty. #32440
Versions within Rancher
Images
- rancher/rancher:v2.5.9
- rancher/rancher-agent:v2.5.9
Tools
Kubernetes Versions
- 1.20.8 (Default)
- 1.19.12
- 1.18.20
- 1.17.17
Upgrades and Rollbacks
Rancher supports both upgrade and rollback. To change the Rancher version, note the specific version you would like to upgrade or roll back to.
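As a hedged sketch of pinning the Rancher version during a Helm-based upgrade (the repository name, chart version, and hostname are placeholders):

```bash
# Hedged example: upgrade an HA install to a specific Rancher chart version.
helm repo update
helm upgrade rancher rancher-stable/rancher \
  --namespace cattle-system \
  --version 2.5.9 \
  --set hostname=rancher.example.com
```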
Please be aware that upon an upgrade to v2.3.0+, any edits to a Rancher launched Kubernetes cluster will cause all system components to restart due to added tolerations to Kubernetes system components. Plan accordingly.
Recent changes to cert-manager require an upgrade if you have an HA install of Rancher using self-signed certificates. If you are using cert-manager older than v0.9.1, please see the documentation on how to upgrade cert-manager. Docs
Existing GKE clusters and imported clusters will continue to operate as-is. Only new creations and registered clusters will use the new full lifecycle management.
Important: When rolling back, we expect you to roll back to the state at the time of your upgrade. Any changes made after the upgrade will not be reflected.