Release v2.10.0
Important: Review the Install/Upgrade Notes before upgrading to any Rancher version.
Rancher v2.10.0 is the latest minor release of Rancher. This is a Community version release that introduces new features, enhancements, and various updates.
Highlights
Rancher General
Features and Enhancements
- Rancher now supports Kubernetes v1.31. See #46197 for information on Rancher support for Kubernetes v1.31. Additionally, the upstream Kubernetes changelogs for v1.31 can be viewed for a full list of changes.
Behavior Changes
- Kubernetes v1.27 is no longer supported. Before you upgrade to Rancher v2.10.0, make sure that all clusters are running Kubernetes v1.28 or later. See #47591.
- The new annotation `field.cattle.io/creator-principal-name` was introduced in addition to the existing `field.cattle.io/creatorId`. It allows specifying the creator's principal name when creating a cluster or a project. If this annotation is used, the `userPrincipalName` field of the corresponding `ClusterRoleTemplateBinding` or `ProjectRoleTemplateBinding` will be set to the specified principal. The principal must belong to the creator's user, which is enforced by the webhook. See #46828.
- When searching for group principals with a SAML authentication provider (with LDAP turned off), Rancher now returns a principal of the correct type (group) with a name matching the search term. When searching principals with a SAML provider (with LDAP turned off) without specifying the desired type (as in Add cluster/project member), Rancher now returns both user and group principals with names matching the search term. See #44441.
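For illustration, a provisioning cluster manifest using the creator annotations above might look like the following sketch (the cluster name, user ID, and principal are hypothetical):

```yaml
apiVersion: provisioning.cattle.io/v1
kind: Cluster
metadata:
  name: example-cluster          # hypothetical name
  namespace: fleet-default
  annotations:
    field.cattle.io/creatorId: u-abc123   # hypothetical creator user ID
    # Optional: record the creator's principal. The principal must belong
    # to the user referenced by creatorId; the Rancher webhook enforces this.
    field.cattle.io/creator-principal-name: okta_user://user@example.com
```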
- Rancher now captures the last used time for Tokens and stores it in the `lastUsedAt` field. If the Authorized Cluster Endpoint is enabled and used on a downstream cluster, Rancher captures the last used time in the `ClusterAuthToken` object and makes a best effort to sync it back to the corresponding Token in the upstream cluster. See #45732.
- Rancher deploys the System Upgrade Controller (SUC) to facilitate Kubernetes upgrades for imported RKE2/K3s clusters. Starting with this version, the mechanism used to deploy this component in downstream clusters has transitioned from legacy V1 apps to fully supported V2 apps, providing a seamless upgrade process for Rancher. For more details, please see this issue comment.
Rancher App (Global UI)
Behavior Changes
- This release includes a major upgrade of the Dashboard (Cluster Explorer) Vue framework from Vue 2 to Vue 3. Please see the v2.10.0 UI extension changelog for documentation on updating existing UI extensions to be compliant with the Rancher v2.10 UI framework. If you experience a page that fails to load, please file an issue via the Dashboard repository and choose the "Bug report" option so we can investigate further. See #7653.
- The performance of the Clusters lists on the Home page and in the Side Menu has greatly improved when there are hundreds of clusters. See #11995 and #11993.
- The previous Ember UI (Cluster Manager) is no longer directly accessible. Pages that rely on the previous UI will continue to be embedded in the new Vue UI (Cluster Explorer). See #11371.
- Updated the data directory configuration by replacing the checkbox with three input options:
  - Use default data directory configuration.
  - Use a common base directory for data directory configuration (sub-directories will be used for the system-agent, provisioning, and distro paths). This option displays a text input where users can enter a base directory for all three sub-directories, which Rancher programmatically appends to the correct sub-directories.
  - Use custom data directories. This option displays three text inputs, one for each sub-directory type, where users can input each path individually.

  See #11560.
Bug Fixes
- Fixed an issue where creating a GKE cluster in the Rancher UI would result in provisioning failures because the `clusterIpv4CidrBlock` and `clusterSecondaryRangeName` fields conflict. See #8749.
K3s Provisioning
Known Issues
- An issue was discovered where upgrading the Kubernetes version of downstream node driver and custom K3s clusters may result in an etcd node reporting `NodePressure`, and eventually the `rancher-system-agent` reporting failures to execute plans. If this issue is encountered, it can be resolved by running `systemctl restart k3s.service` on the affected etcd-only nodes. See #48096 and this issue comment for more information.
RKE Provisioning
Important: With the release of Rancher Kubernetes Engine (RKE) v1.6.0, we are informing customers that RKE is now deprecated. RKE will be maintained for two more versions, following our deprecation policy.
Please note, End-of-Life (EOL) for RKE is July 31st, 2025. Prime customers must replatform from RKE to RKE2 or K3s.
RKE2 and K3s provide stronger security, and move away from upstream-deprecated Docker machine. Learn more about replatforming here.
Major Bug Fixes
- Fixed a permission issue which led to failures when attempting to provision an RKE cluster on vSphere. See #47938.
Rancher CLI
Major Bug Fixes
- When using the Rancher CLI, the prompt to choose an auth provider will now always have the local provider in the first position. See #46128.
Behavior Changes
- The deprecated subcommand `globaldns` was removed from the Rancher CLI.
Authentication
Features and Enhancements
- Added support for SAML `logout-all` to the SAML-based external auth providers (EAP). A `logout-all` logs a user not only out of Rancher, but also out of the associated session in the EAP. This logs the user out of other applications attached to the same session as well. When logging into Rancher again, a full authentication has to be performed to establish a new session in the EAP. This is in contrast to a regular logout, where a re-login re-uses the session and bypasses the need for actual re-authentication. The EAP configuration form has been extended so that the configuring admin can choose whether `logout-all` is available to users, and if so, whether users are forced to always use `logout-all` instead of having a choice between it and a regular logout. See #38494.
- There is now an option to force a password reset on first logon when setting up a `rancher2_user`. See #45736.
Continuous Delivery (Fleet)
- Fleet v0.11.0 is released alongside Rancher v2.10 and improves several log and status messages. It reduces the number of reconciles performed by the controllers in response to resource changes, and adds documented Kubernetes events for the GitRepo resource that users can subscribe to.
Known Issues
- There are a few known issues which were not fixed in time and which affect Rancher:
  - Target customization for namespace labels and annotations cannot modify or remove labels when updating. See #3064.
  - In version 0.10, GitRepo resources provided a comprehensive list of all deployed resources across all clusters in their status. In version 0.11, this list has been changed to report each resource only once until the feature is integrated into the Rancher UI. While this change addresses a UI freeze issue, it may result in inaccuracies in the resource list and resource counts under some conditions. See #3027.
Role-Based Access Control (RBAC) Framework
Features and Enhancements
- Impersonation in downstream clusters via the Rancher proxy is now supported, enabling users with the appropriate permissions to impersonate other users or ServiceAccounts. See #41988.
- It is possible to opt out of cluster owner and project owner RBAC for a newly provisioned cluster if the cluster YAML includes the annotation `field.cattle.io/no-creator-rbac: "true"`. This is useful when a service account provisions the cluster, as service accounts can't have RBAC applied to them. See #45591.
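As a sketch, the opt-out annotation described above sits in the cluster YAML like this (the cluster name is hypothetical):

```yaml
apiVersion: provisioning.cattle.io/v1
kind: Cluster
metadata:
  name: ci-provisioned-cluster   # hypothetical name
  namespace: fleet-default
  annotations:
    # Skip creating cluster owner / project owner RBAC for the creator;
    # useful when a service account provisions the cluster.
    field.cattle.io/no-creator-rbac: "true"
```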
Virtualization (Harvester)
Features and Enhancements
- A warning banner has been added when provisioning a multi-node Harvester RKE2 cluster in Rancher, advising that you need to allocate one more vGPU than the number of nodes in order to avoid the "un-schedulable" errors seen after cluster updates. See #10989.
Behavior Changes
- On the Cloud Credential list, you can now easily see whether a Harvester credential is about to expire or has expired, and choose to renew it. You will also be notified on the Cluster Management Clusters list when an associated Harvester cloud credential is about to expire or has expired. After upgrading, an existing expired Harvester credential will not show a warning; you can still renew the token from the resources menu. See #11332.
Windows Nodes - General
Behavior Changes
- Rancher v2.10.0 includes changes to how Windows nodes behave after a node reboot, and provides two new settings to control how Windows services created by Rancher behave on startup.

  Two new agent environment variables have been added for Windows nodes, `CATTLE_ENABLE_WINS_SERVICE_DEPENDENCY` and `CATTLE_ENABLE_WINS_DELAYED_START`. These can be configured in the Rancher UI and are respected by all nodes running `rancher-wins` version `v0.4.20` or greater.

  - `CATTLE_ENABLE_WINS_SERVICE_DEPENDENCY` defines a service dependency between RKE2 and `rancher-wins`, ensuring RKE2 will not start before `rancher-wins`.
  - `CATTLE_ENABLE_WINS_DELAYED_START` changes the start type of `rancher-wins` to `AUTOMATIC (DELAYED)`, ensuring it starts after other Windows services.

  Additionally, Windows nodes will now attempt to execute plans up to 5 times if the initial application fails. This change, together with appropriate use of the two agent environment variables above, aims to address plan failures for Windows nodes after a node reboot.

  See #42458.
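As a sketch, and assuming the `spec.agentEnvVars` field of a provisioning cluster is used to supply agent environment variables (the cluster name and field placement are illustrative, not confirmed by this release note):

```yaml
apiVersion: provisioning.cattle.io/v1
kind: Cluster
metadata:
  name: windows-rke2-cluster     # hypothetical name
  namespace: fleet-default
spec:
  agentEnvVars:
    # Make the rke2 service depend on rancher-wins so RKE2 never starts first.
    - name: CATTLE_ENABLE_WINS_SERVICE_DEPENDENCY
      value: "true"
    # Change rancher-wins to AUTOMATIC (DELAYED) so it starts after other
    # Windows services.
    - name: CATTLE_ENABLE_WINS_DELAYED_START
      value: "true"
```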
Windows Nodes in RKE2 Clusters
Behavior Changes
- A change was made starting with RKE2 versions `v1.28.15`, `v1.29.10`, `v1.30.6`, and `v1.31.2` on Windows which allows the user to configure `*_PROXY` environment variables on the `rke2` service after the node has already been provisioned. Previously, any attempt to do so would be a no-op. With this change, if the `*_PROXY` environment variables are set on the cluster after a Windows node is provisioned, they can be automatically removed from the `rke2` service. However, if the variables are set before the node is provisioned, they cannot be removed. More information can be found here.

  A workaround is to remove the environment variables from the `rancher-wins` service and restart the service or node, at which point `*_PROXY` environment variables will no longer be set on either service:

  ```powershell
  Remove-ItemProperty HKLM:SYSTEM\CurrentControlSet\Services\rancher-wins -Name Environment
  Restart-Service rancher-wins
  ```

  See #47544.
Install/Upgrade Notes
- If you’re installing Rancher for the first time, your environment must fulfill the installation requirements.
Upgrade Requirements
- Creating backups: Create a backup before you upgrade Rancher. To roll back Rancher after an upgrade, you must first back up and restore Rancher to the previous Rancher version. Because Rancher will be restored to the same state as when the backup was created, any changes post-upgrade will not be included after the restore.
- CNI requirements:
- For Kubernetes v1.19 and later, disable firewalld as it’s incompatible with various CNI plugins. See #28840.
- When upgrading or installing a Linux distribution that uses nf_tables as the backend packet filter, such as SLES 15, RHEL 8, Ubuntu 20.10, Debian 10, or later, upgrade to RKE v1.19.2 or later to get Flannel v0.13.0. Flannel v0.13.0 supports nf_tables. See Flannel #1317.
- Requirements for air gapped environments:
- When using a proxy in front of an air-gapped Rancher instance, you must pass additional parameters to `NO_PROXY`. See the documentation and issue #2725.
- When installing Rancher with Docker in an air-gapped environment, you must supply a custom `registries.yaml` file to the `docker run` command, as shown in the K3s documentation. If the registry has certificates, then you'll also need to supply those. See #28969.
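A minimal sketch of such a `registries.yaml`, assuming a hypothetical private registry at `registry.internal.example:5000` (the mirror endpoint and CA path are illustrative):

```yaml
# Mounted into the Rancher container, e.g.:
#   docker run ... -v /path/to/registries.yaml:/etc/rancher/k3s/registries.yaml ...
mirrors:
  docker.io:
    endpoint:
      - "https://registry.internal.example:5000"   # hypothetical mirror
configs:
  "registry.internal.example:5000":
    tls:
      ca_file: /etc/ssl/certs/registry-ca.pem      # hypothetical CA file
```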
- Requirements for general Docker installs:
- When starting the Rancher Docker container, you must use the `privileged` flag. See documentation.
- When upgrading a Docker installation, a panic may occur in the container, which causes it to restart. After restarting, the container will come up and work as expected. See #33685.
Versions
Please refer to the README for the latest and stable Rancher versions.
Please review our version documentation for more details on versioning and tagging conventions.
Important: With the release of Rancher Kubernetes Engine (RKE) v1.6.0, we are informing customers that RKE is now deprecated. RKE will be maintained for two more versions, following our deprecation policy.
Please note, EOL for RKE is July 31st, 2025. Prime customers must replatform from RKE to RKE2 or K3s.
RKE2 and K3s provide stronger security, and move away from upstream-deprecated Docker machine. Learn more about replatforming here.
Images
- rancher/rancher:v2.10.0
Tools
Kubernetes Versions for RKE
- v1.31.2 (Default)
- v1.30.6
- v1.29.10
- v1.28.15
Kubernetes Versions for RKE2/K3s
- v1.31.2 (Default)
- v1.30.6
- v1.29.10
- v1.28.15
Rancher Helm Chart Versions
In Rancher v2.6.0 and later, in the Apps & Marketplace UI, many Rancher Helm charts are named with a major version that starts with 100. This avoids simultaneous upstream changes and Rancher changes from causing conflicting version increments. It also complies with semantic versioning (SemVer), which is a requirement for Helm. You can see the upstream version number of a chart in the build metadata, for example: `100.0.0+up2.1.0`. See #32294.
Other Notes
Experimental Features
Rancher now supports using an OCI Helm chart registry for Apps & Marketplace. See the documentation on using OCI-based Helm chart repositories, and note that this feature is in an experimental stage. See #29105 and #45062.
Deprecated Upstream Projects
In June 2023, Microsoft deprecated the Azure AD Graph API that Rancher had been using for authentication via Azure AD. When updating Rancher, update the configuration to make sure that users can still use Rancher with Azure AD. See the documentation and issue #29306 for details.
Removed Legacy Features
Apps functionality in the cluster manager has been deprecated as of the Rancher v2.7 line. This functionality has been replaced by the Apps & Marketplace section of the Rancher UI.
Also, `rancher-external-dns` and `rancher-global-dns` have been deprecated as of the Rancher v2.7 line.
The following legacy features have been removed as of Rancher v2.7.0. The deprecation and removal of these features was announced in previous releases. See #6864.
UI and Backend
- CIS Scans v1 (Cluster)
- Pipelines (Project)
- Istio v1 (Project)
- Logging v1 (Project)
- RancherD
UI
- Multiclusterapps (Global): Apps within the Multicluster Apps section of the Rancher UI.
Long-standing Known Issues
Long-standing Known Issues - Cluster Provisioning
- Not all cluster tools can be installed on a hardened cluster.
- Rancher v2.8.1:
- When you attempt to register a new etcd/controlplane node in a CAPR-managed cluster after a failed etcd snapshot restoration, the node can become stuck in a perpetual paused state, displaying the error message `[ERROR] 000 received while downloading Rancher connection information. Sleeping for 5 seconds and trying again`. As a workaround, you can unpause the cluster by running `kubectl edit clusters.cluster clustername -n fleet-default` and setting `spec.unpaused` to `false`. See #43735.
- Rancher v2.7.2:
- If you upgrade or update any hosted cluster, and go to Cluster Management > Clusters while the cluster is still provisioning, the Registration tab is visible. Registering a cluster that is already registered with Rancher can cause data corruption. See #8524.
Long-standing Known Issues - RKE Provisioning
- Rancher v2.9.0:
- The Weave CNI plugin for RKE v1.27 and later is now deprecated, because the plugin is deprecated for upstream Kubernetes v1.27 and later. RKE cluster creation with Weave will not go through, as it raises a validation warning. See #11322.
Long-standing Known Issues - RKE2 Provisioning
- Rancher v2.9.0:
- When adding the `provisioning.cattle.io/allow-dynamic-schema-drop` annotation through the cluster config UI, the annotation disappears before the value field can be added. When viewing the YAML, the respective value field is not updated and is displayed as an empty string. As a workaround, when creating the cluster, set the annotation by using the Edit Yaml option located in the ⋮ dropdown attached to your respective cluster in the Cluster Management view. See #11435.
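As a sketch of the Edit Yaml workaround, the annotation would be set directly in the cluster YAML (the cluster name is hypothetical):

```yaml
apiVersion: provisioning.cattle.io/v1
kind: Cluster
metadata:
  name: example-rke2-cluster     # hypothetical name
  namespace: fleet-default
  annotations:
    # Set the value here; the cluster config UI drops the annotation
    # before a value can be entered.
    provisioning.cattle.io/allow-dynamic-schema-drop: "true"
```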
- Rancher v2.7.7:
- Due to the backoff logic in various components, downstream provisioned K3s and RKE2 clusters may take longer to re-achieve an `Active` status after a migration. If you see that a downstream cluster is still updating or in an error state immediately after a migration, please let it attempt to resolve itself. This might take up to an hour to complete. See #34518 and #42834.
- Rancher v2.7.6:
- Rancher v2.7.2:
- When viewing or editing the YAML configuration of downstream RKE2 clusters through the UI, `spec.rkeConfig.machineGlobalConfig.profile` is set to `null`, which is an invalid configuration. See #8480.
Long-standing Known Issues - K3s Provisioning
- Rancher v2.7.6:
- Rancher v2.7.2:
- Clusters remain in an `Updating` state even when they contain nodes in an `Error` state. See #39164.
Long-standing Known Issues - Rancher App (Global UI)
- Rancher v2.9.2:
- Although system mode node pools must have at least one node, the Rancher UI allows a minimum node count of zero. Inputting a zero minimum node count through the UI can cause cluster creation to fail due to an invalid parameter error. To prevent this error from occurring, enter a minimum node count at least equal to the node count. See #11922.
- Rancher v2.7.7:
- When creating a cluster in the Rancher UI, the `Cluster Name` field does not allow the use of an underscore `_`. See #9416.
Long-standing Known Issues - Hosted Rancher
- Rancher v2.7.5:
- The Cluster page shows the Registration tab when updating or upgrading a hosted cluster. See #8524.
Long-standing Known Issues - EKS
- Rancher v2.7.0:
- EKS clusters on Kubernetes v1.21 or below on Rancher v2.7 cannot be upgraded. See #39392.
Long-standing Known Issues - Authentication
- Rancher v2.9.0:
- There are some known issues with the OpenID Connect provider support:
- When the generic OIDC auth provider is enabled, and you attempt to add auth provider users to a cluster or project, users are not populated in the dropdown search bar. This is expected behavior as the OIDC auth provider alone is not searchable. See #46104.
- When the generic OIDC auth provider is enabled, auth provider users that are added to a cluster/project by their username are not able to access resources upon logging in. A user will only have access to resources upon login if the user is added by their userID. See #46105.
- When the generic OIDC auth provider is enabled and an auth provider user in a nested group is logged into Rancher, the user will see the following error when they attempt to create a Project: `projectroletemplatebindings.management.cattle.io is forbidden: User "u-gcxatwsnku" cannot create resource "projectroletemplatebindings" in API group "management.cattle.io" in the namespace "p-9t5pg"`. However, the project is still created. See #46106.
Long-standing Known Issues - Rancher Webhook
- Rancher v2.7.2:
- A webhook is installed in all downstream clusters. There are several issues that users may encounter with this functionality:
  - If you roll back from Rancher v2.7.2 or later to a Rancher version earlier than v2.7.2, the webhooks will remain in downstream clusters. Since the webhook is designed to be 1:1 compatible with specific versions of Rancher, this can cause unexpected behaviors downstream. The Rancher team has developed a script which should be used after the rollback is complete (meaning after a Rancher version earlier than v2.7.2 is running). This removes the webhook from affected downstream clusters. See #40816.
Long-standing Known Issues - Harvester
- Rancher v2.9.0:
- In the Rancher UI, when navigating between Harvester clusters of different versions, a refresh may be required to view version-specific functionality. See #11559.
- Rancher v2.7.2:
- If you’re using Rancher v2.7.2 with Harvester v1.1.1 clusters, you won’t be able to select the Harvester cloud provider when deploying or updating guest clusters. The Harvester release notes contain instructions on how to resolve this. See #3750.
Long-standing Known Issues - Backup/Restore
- When migrating to a cluster with the Rancher Backup feature, the server-url cannot be changed to a different location. It must continue to use the same URL.
- Rancher v2.7.7:
  - Due to the backoff logic in various components, downstream provisioned K3s and RKE2 clusters may take longer to re-achieve an `Active` status after a migration. If you see that a downstream cluster is still updating or in an error state immediately after a migration, please let it attempt to resolve itself. This might take up to an hour to complete. See #34518 and #42834.