Please install v2.5.7 instead of v2.5.6.
v2.5.7 contains a fix for an issue in this release:
When using a private registry without authentication, provisioning or updating RKE clusters created with nodes from an infrastructure provider will fail. #31600
This release addresses a security vulnerability found in Rancher:
- Rancher install or upgrade must occur with Helm 3.2.x+ due to changes in the latest cert-manager release. #29213
- Rancher HA cluster should be upgraded to Kubernetes 1.17+ before installing Rancher 2.5.
- If using a proxy in front of an air-gapped Rancher, you must pass additional parameters to `NO_PROXY`. #2725 Docs
- The local cluster can no longer be turned off, which means all admins will have access to the local cluster. If you would like to restrict permissions to the local cluster, there is a new restricted-admin role that must be used. Access to the local cluster can now be disabled by setting `hide_local_cluster` to true from the v3/settings API. #29325 Docs
- When starting the Rancher Docker container, the privileged flag must be used. See the docs for more info
- When installing in an air-gapped environment, you must supply a custom registries.yaml file to the Docker run command as shown in the k3s docs. If the registry has certs, you will also need to supply those. #28969
- There are UI issues around startup time #28800, #28798
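For the air-gap note above, here is a minimal sketch of what such a registries.yaml could look like; the registry hostname, port, and CA path are placeholders for illustration, not values from this release:

```shell
# Hypothetical registries.yaml for an air-gapped install; the hostname, port,
# and CA certificate path below are placeholders, not values from this release.
cat > registries.yaml <<'EOF'
mirrors:
  docker.io:
    endpoint:
      - "https://registry.example.com:5000"
configs:
  "registry.example.com:5000":
    tls:
      ca_file: /etc/ssl/certs/registry-ca.pem
EOF
cat registries.yaml
```

The file would then be mounted into the Rancher container on the Docker run command line (for example with `-v $(pwd)/registries.yaml:/etc/rancher/k3s/registries.yaml`), per the k3s docs referenced above.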
Kubernetes 1.19 + firewalld
- For K8s 1.19 and newer, we recommend disabling firewalld as it has been found to be incompatible with various CNI plugins. #28840
Please refer to the README for latest and stable versions.
Please review our version documentation for more details on versioning and tagging conventions.
- Added support for Kubernetes v1.20
- Added ability to set environment variables to the agent in order to support downstream clusters behind a proxy #31370 Docs
- Added the `ingress.enabled` Rancher Helm flag. When set to false, Helm will not install a Rancher ingress; set the option to false to deploy your own ingress. Docs
- Node Pool Enhancements
- vSphere out-of-tree cloud provider: Added the ability to configure the vSphere external cloud provider through Apps & Marketplace in the Cluster Explorer. The vSphere out-of-tree provider can be configured using the vSphere Cloud Provider Interface (CPI) and Container Storage Interface (CSI) charts. Note: Your cluster must have the cloud provider set to `external` in order to allow out-of-tree provider configuration. For those already using the vSphere in-tree provider, migration docs are available. #20131 #23357 Docs
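The `external` cloud provider requirement above corresponds to a small fragment of the RKE cluster configuration. This is a sketch only, assuming the standard RKE `cloud_provider` schema:

```shell
# Sketch of the RKE cluster.yml fragment that marks the cloud provider as
# external, allowing out-of-tree (CPI/CSI) configuration to be applied.
# This is a fragment only, not a complete cluster.yml.
cat > cloud-provider-fragment.yml <<'EOF'
cloud_provider:
  name: external
EOF
cat cloud-provider-fragment.yml
```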
- Previously, any new charts for Fleet would automatically be deployed into any existing Rancher install. As of v2.5.6, we’ve added the ability to set a minimum version for Fleet charts so they won’t automatically be deployed #30934
- Linode Kubernetes Engine (LKE) is now available as a cluster driver and new Kubernetes clusters can be spun up directly with LKE. The cluster driver is inactive by default and will need to be activated to appear as an option.
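The `ingress.enabled` flag noted earlier can be supplied either with `--set` or via a values file. A minimal sketch follows; the repo and namespace names in the commented command are conventional placeholders, not instructions from this release:

```shell
# Minimal Helm values fragment disabling the Rancher-managed ingress so you
# can deploy your own. The filename is arbitrary; the repo and namespace in
# the commented command are conventional placeholders.
cat > rancher-values.yaml <<'EOF'
ingress:
  enabled: false
EOF
cat rancher-values.yaml
# helm install rancher rancher-latest/rancher \
#   --namespace cattle-system -f rancher-values.yaml
```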
Cluster Explorer Features
- Added support for configuring resource settings #31099
Cluster Manager Tools
Major Bug Fixes
- Fixed an issue where the Rancher server chart couldn’t be installed onto a Kubernetes cluster without an ingress #30535
- Fixed an issue where etcd would have increased traffic and memory usage after upgrading #30168
- Fixed an issue where public Helm chart repositories were not working on clusters behind a proxy #29961
- Fixed an issue where the telemetry client had a socket leak and caused upgrade issues or general k8s issues #28360 #27870
- Fixed an issue where vSphere vCenter server entries from the in-tree cloud provider configuration would not be removed #30606
- Fixed an issue where the cluster private registry was not working with the rancher-agent image for clusters provided by node drivers #30605
- Fixed an issue where EC2 node provisioning failed when using a SLES15 AMI #30717
- Fixed an issue where nodes wouldn’t drain before deleting when scaling down if the node had pods with emptyDir volumes #31455
- Fixed an issue where RKE clusters would get stuck when there were `Cordoned` worker nodes and master nodes were starting to be removed #30049
- Fixed an issue where clusters couldn’t be imported with the kubernetes-python-client due to additional `---` at the end of the import file #31252
- Fixed an issue where imported clusters would return 404s from the agent #15172
- Fixed an issue where Windows nodes failed to create the RKE log directory if `prefixPath` was not set #30966
- Fixed an issue where monitoring in Cluster Explorer wasn’t working on Windows server-core versions #27911
- Fixed an issue when configuring OpenLDAP with StartTLS #30930
- Fixed an issue with Fleet where GitRepos and clusters would get stuck in a
- Fixed an issue with Fleet where adding a Git repo with uppercase letters in the path would fail #30792
- Fixed an issue where logging in the Cluster Manager UI was failing on new AKS clusters #30425
- Fixed an issue where the logging in Cluster Explorer was not working with non-standard Docker root directory #30329
- Fixed an issue where the display name was incorrect when nodes in a single cluster have FQDN hostnames under multiple different subdomains. #27873
- Fixed an issue where ClusterRoleBinding’s apiVersion would log deprecation warning for any k8s 1.19 clusters #30043
- Fixed a list of UI issues within Cluster Manager and within Cluster Explorer
The primary UI in Rancher since v2.0 is now referred to as Cluster Manager in the UI. The new Cluster Explorer dashboard, experimentally released in Rancher 2.4, has graduated to GA status. Some features are only available in the new Cluster Explorer dashboard. Other features in the new UI offer functionality similar to existing features in Cluster Manager, but differences in implementation may cause variations in behavior.
Duplicated Features in Cluster Manager and Cluster Explorer
- Only 1 version of the feature may be installed at any given time due to potentially conflicting CRDs.
- Each feature should only be managed by the UI that it was deployed from.
- If you have installed the feature in Cluster Manager, you must uninstall in Cluster Manager before attempting to install the new version in Cluster Explorer dashboard.
| Cluster Manager Feature | Replacement in Cluster Explorer |
| --- | --- |
| Rancher Monitoring | Monitoring in the Cluster Manager UI has been replaced with a new monitoring chart available in the Apps & Marketplace in Cluster Explorer. |
| Rancher Alerts and Notifiers | Alerting and notifier functionality is now directly integrated with the new monitoring chart available in the Apps & Marketplace in Cluster Explorer. |
| Rancher Logging | Functionality replaced with a new logging solution using a new logging chart available in the Apps & Marketplace in Cluster Explorer. |
| MultiCluster Apps | Deploying to multiple clusters is now recommended to be handled with Rancher Continuous Delivery, powered by Fleet and available in Cluster Explorer. |
| Kubernetes CIS 1.4 Scanning | Kubernetes CIS 1.5+ benchmark scanning is now handled by a new scan tool deployed with a CIS benchmarks chart available in the Apps & Marketplace in Cluster Explorer. |
| Rancher Pipelines | Git-based deployment pipelines are now recommended to be handled with Rancher Continuous Delivery, powered by Fleet and available in Cluster Explorer. |
| Istio v1.5 | The Istio project has ended support for Istio 1.5 and recommends all users upgrade. Newer Istio versions are available as a chart in the Apps & Marketplace in Cluster Explorer. |
Known Major Issues
- Kubernetes v1.20 has an issue with the vSphere in-tree cloud provider. Rancher supports the out-of-tree vSphere cloud provider as of v2.5.6, which is when k8s 1.20 support was introduced. #31172
- Rotating encryption keys with a custom encryption provider is not supported. #30539
- Logging in Cluster Explorer may not capture all kubelet logs for cloud providers. #30383
- Istio 1.5.10 is not supported in air gapped environments.
- In air-gapped setups, the generated `rancher-images.txt` that is used to mirror images on private registries does not contain the images required to run Monitoring in Cluster Manager v0.1.x. Clusters running k8s 1.15 and below will need to upgrade their k8s version and leverage Monitoring in Cluster Manager v0.2.x, or upgrade to Monitoring in Cluster Explorer.
Cluster Explorer Feature Caveats and Upgrades
- When migrating to a cluster with the Rancher Backup feature, the server-url cannot be changed to a different location; it must continue to use the same URL.
- Rancher Continuous Delivery (Fleet) is not handled during backup. Backup#69
Versions within Rancher
- 1.20.4 (default)
Upgrades and Rollbacks
Rancher supports both upgrade and rollback. Please note the version you would like to upgrade or roll back to when changing the Rancher version.
Please be aware that upon an upgrade to v2.3.0+, any edits to a Rancher launched Kubernetes cluster will cause all system components to restart due to added tolerations to Kubernetes system components. Plan accordingly.
Recent changes to cert-manager require an upgrade if you have an HA install of Rancher using self-signed certificates. If you are using cert-manager older than v0.9.1, please see the documentation on how to upgrade cert-manager.
Important: When rolling back, you are expected to roll back to the state at the time of your upgrade. Any changes made after the upgrade will not be reflected.
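As one illustration of the upgrade/rollback guidance above, Helm lets you pin the exact chart version you are moving to. This is a sketch only; the version, repo name, and namespace are placeholders, not instructions from this release:

```shell
# Illustrative sketch only: pinning an explicit Rancher chart version for an
# upgrade or rollback. The version, repo name, and namespace are placeholders.
CHART_VERSION="2.5.6"
echo "Target chart version: ${CHART_VERSION}" > target-version.txt
cat target-version.txt
# helm upgrade rancher rancher-latest/rancher \
#   --namespace cattle-system --version "${CHART_VERSION}"
```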