Release v2.6.5
It is important to review the Install/Upgrade Notes below before upgrading to any Rancher version.
In Rancher v2.6.4, the cluster-api module has been upgraded from v0.4.4 to v1.0.2, in which the apiVersion of CAPI CRDs is upgraded from `cluster.x-k8s.io/v1alpha4` to `cluster.x-k8s.io/v1beta1`. This causes rollbacks from Rancher v2.6.4 to any previous version of Rancher v2.6.x to fail, because the CRDs needed by the previous version are no longer available in v1beta1. To avoid this, run the Rancher resource cleanup script before attempting the restore or rollback. The script can be found in the rancherlabs/support-tools repo, and its usage is described in the backup-restore operator docs. In addition, when rolling back Rancher on the same cluster using the Rancher Backup and Restore app in 2.6.4+, the updated steps to create the Restore custom resource must be followed. See also #36803 for more details.
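As a quick pre-rollback check, you can inspect which API versions the CAPI CRDs currently serve. This is only an illustrative sketch using one of the affected CRDs (`clusters.cluster.x-k8s.io`):

```bash
# List the API versions served by one of the CAPI CRDs in the local cluster.
# On Rancher v2.6.4+ this is expected to show v1beta1; earlier v2.6.x releases expect v1alpha4.
kubectl get crd clusters.cluster.x-k8s.io \
  -o jsonpath='{range .spec.versions[*]}{.name}{"\t"}{.served}{"\n"}{end}'
```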
Features and Enhancements
New Integration with Rancher: NeuVector Security Platform
Rancher 2.6.5 introduces NeuVector, the first open-source container-centric security platform, as a new integration. NeuVector can be enabled through a Helm chart that may be installed either through Apps & Marketplace or through the Cluster Tools button in Cluster Explorer in the UI. Once NeuVector is enabled, users can deploy and manage NeuVector clusters within Rancher. See the NeuVector documentation for more information on deploying and managing NeuVector through Rancher. Refer also to the Rancher documentation for more.
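If you prefer the CLI over the Apps & Marketplace UI, a minimal install sketch might look like the following. The chart repository URL is a placeholder and the chart name is an assumption, so verify both against your Rancher setup before use:

```bash
# Hypothetical CLI equivalent of installing the NeuVector feature chart from Apps & Marketplace.
# The repository URL and chart name are placeholders; the namespace matches the one noted
# under Installation Details below.
helm repo add rancher-charts https://example.com/rancher-charts   # placeholder URL
helm repo update
helm install neuvector rancher-charts/neuvector \
  --namespace cattle-neuvector-system \
  --create-namespace
```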
- Features
- Provides real-time compliance, visibility, and protection for critical apps and data.
- Supports scanning of SUSE Linux operating systems and SUSE Rancher Kubernetes distributions (RKE1 and RKE2).
- Features built-in navigation to deploy the NeuVector console using single sign-on (SSO).
- Installation Details
- The integration with Rancher works with NeuVector 5.0.0 or higher only at this time.
- NeuVector container images are available for installation from the Rancher Apps & Marketplace.
- NeuVector deployment will deploy containers into the `cattle-neuvector-system` namespace.
- Installation Recommendations
- When NeuVector is installed through the Rancher chart, users can log in to the NeuVector console directly through Rancher. It is highly recommended to also log into NeuVector directly and change the default admin password.
- The NeuVector vulnerability scanner image is released daily to include the latest security advisory updates. In addition, the scanner image is mirrored into the Rancher registry at `rancher/mirrored-neuvector-scanner` daily. It is recommended that you mirror the scanner image into your private registry, if needed, based on your own schedule; see the sketch at the end of this section.
- Support Limitations
- Only admins and cluster owners are currently supported.
- Fleet multi-cluster deployment is not supported.
- NeuVector is not supported on clusters with Windows nodes.
- NeuVector installation is not supported on hardened clusters.
- NeuVector installation is not supported on SELinux clusters.
- Other Limitations
- Previous deployments from Rancher, such as from our Partners chart repository or the primary NeuVector Helm chart, must be completely removed in order to update to the new integrated feature chart. See #37447.
- When NeuVector is deployed in an air-gapped Rancher setup, the NeuVector container will not be regularly updated and will only contain the database of CVEs at the time the images are pulled into your own private registry.
- Container runtime is not auto-detected for different cluster types when installing the NeuVector chart. To work around this, you can specify the runtime manually. See #37481.
- Sometimes when the controllers are not ready, the NeuVector UI is not accessible from the Rancher UI. During this time, controllers will try to restart, and it takes a few minutes for the controllers to be active. See #37400.
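For the scanner-image mirroring recommendation above, a minimal sketch (the private registry host `registry.example.com` is a placeholder; adjust the tag and schedule to your environment):

```bash
# Mirror the daily-updated scanner image into a private registry (placeholder registry host).
docker pull rancher/mirrored-neuvector-scanner:latest
docker tag rancher/mirrored-neuvector-scanner:latest registry.example.com/rancher/mirrored-neuvector-scanner:latest
docker push registry.example.com/rancher/mirrored-neuvector-scanner:latest
```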
New in Project Monitoring v2
Project Monitoring v2, also known as Prometheus Federator, is now available and supported.
- Monitoring v2 is now at parity with Monitoring v1. Please note that Monitoring v1 was deprecated in Rancher v2.5 and will be removed in an upcoming release.
- Project Monitoring v2 introduces a custom resource called ProjectHelmCharts. This custom resource solves the problem where a project owner may not have permission to install or upgrade real Helm charts but still needs to configure monitoring across the namespaces in their project. Project owners can now create Project Monitors to enable monitoring in their projects.
- Users can deploy Monitoring v2 through Rancher’s Apps & Marketplace.
- Limitations
- When enabling Prometheus Federator on an RKE2 cluster, the embedded Helm controller in Prometheus Federator should be disabled in favor of the Helm controller embedded in RKE2, which is responsible for managing the state of internal Kubernetes components (the RKE2 embedded Helm controller has a global scope for implementing HelmChart resources in the cluster). This can be done when installing the chart by setting `helmProjectOperator.helmController.enabled=false`, which is also exposed as an option on the chart installation page in Apps & Marketplace; see the sketch after this list. See #37694.
- At this time, there are no migration instructions from Monitoring v2 to Project Monitoring v2. The existing instructions for migrating from Monitoring v1 to Monitoring v2 will be updated in the next release.
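A minimal CLI sketch of installing Prometheus Federator with the embedded Helm controller disabled on an RKE2 cluster; the repository URL, chart name, and namespace are assumptions, and Apps & Marketplace remains the documented path:

```bash
# Hypothetical CLI install of Prometheus Federator on an RKE2 cluster with its embedded
# Helm controller disabled (the RKE2-embedded Helm controller is used instead).
helm repo add rancher-charts https://example.com/rancher-charts   # placeholder URL
helm repo update
helm install prometheus-federator rancher-charts/prometheus-federator \
  --namespace cattle-monitoring-system \
  --create-namespace \
  --set helmProjectOperator.helmController.enabled=false
```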
New in RKE2 Provisioning
RKE2 provisioning is now GA for Kubernetes v1.22 and up.
- New S3 snapshot and restore feature added. See #34417.
- RKE2 Windows Clusters
- Support Limitations
- There is currently no support for Encryption Key Rotation functionality. See #35436.
New in Rancher
- Rancher on IBM Z is now in tech preview.
New in the Rancher UI
- Rancher Dashboard
- The ability to scale a workload up and down from the Workload detail page has been added. See #5114.
- NeuVector
- RKE2
- Known Issues
- After installing an app from a partner chart repo, the partner chart will upgrade to feature charts if the chart also exists in the feature charts default repo. See #5655.
- In some instances under Users and Authentication, no users are listed and clicking Create to create a new user does not display the entire form. To work around this when encountered, perform a hard refresh to be able to log back in. See Dashboard #5336.
New in RKE1 and RKE2
- RKE2 Windows
- The `system-agent` uninstall script now has Linux and Windows feature parity. See #171.
- RKE1 Windows
- Note that on September 1, 2022, RKE1 Windows will be end-of-life (EOL). For more information, see #179.
- Behavior Changes in RKE2 Clusters
- Custom clusters in RKE2 and K3s will get to an active state before adding worker nodes. This is a behavior change from RKE1, which depends on worker nodes to schedule CoreDNS. See #37017.
- The cluster state changes to `Provisioning` when a worker node is deleted in an RKE2 cluster, which is expected behavior. In RKE1, the cluster state remains `Active` when a snapshot is triggered or a worker node is deleted. See #36689.
- The cluster state changes to `Provisioning` or `Updating` when a snapshot is taken in an RKE2 cluster, which is expected behavior. In RKE1, Rancher is responsible for taking the snapshots. See #36504.
- Known Issues
- RKE2 cluster provisioning fails when using the RHEL 8.5 golden public AWS AMI from Rancher. See #36731. To work around this issue, please see this note.
- RKE2 snapshots display different sizes but are working as expected. See #36713.
- The RKE2 cluster name cannot exceed 63 characters. In addition, removing such clusters from Rancher because they fail to provision causes Rancher server to crash. See #37544.
- OPA Gatekeeper gets stuck when uninstalling on Windows clusters. Note that this applies to both RKE1 and RKE2 clusters. See #37029. A fix is scheduled for 2.6.6.
- Any RKE2 Windows cluster created prior to v2.6.5 through the provisioning v2 framework cannot be upgraded using v2.6.5. Only RKE2 Windows clusters provisioned on v2.6.5+ can be upgraded. See #76.
- The Windows agent for RKE2 Windows nodes does not support auto-upgrades at this time. This functionality is planned for v2.6.6. See #181.
- Windows worker nodes in RKE2 clusters that require the use of a proxy for downstream nodes will not successfully provision. This issue exists only in the system-agent implementation for RKE2 Windows, as it is unable to use a proxy when attempting to pull images while applying a plan. See #37688.
- The Calico CNI will not run on SLE Micro without additional configuration, as the filesystem is read-only and Calico tries to create a flexVolume with a path at `/usr/libexec`. Note that this affects v1.23.6+rke2r2 and v1.22.9+rke2r2 as well as earlier RKE2 versions, but this may be fixed in newer versions that can be added to Rancher via a KDM update. To view the workaround and this issue, refer to #2886.
- When Rancher is air-gapped, Windows worker nodes in RKE2 clusters will not be able to successfully provision. This occurs because a custom CA certificate is required when building a restconfig for Go calls on Windows, which is part of the system-agent installation flow for RKE2 Windows. See #37695.
Major Bug Fixes
- RKE2 node driver cluster provisions successfully in both Kubernetes v1.22.x and v1.23.x. See #36939.
- On the Cluster Management page, snapshot-related actions such as create/restore and `rotate certificate` are now available for a standard user in RKE1. See Dashboard #5011.
- Cluster usage metrics have been removed from the Rancher homepage cluster list. Users may still check usage per cluster by using the cluster dashboard. See #5430.
- When performing a backup/restore using Helm, the command will now work as expected if Let’s Encrypt is used. See #37060.
- If you set a restricted PSP as the cluster default and create a new namespace with an unrestricted PSP, pod/deployment creation in the new project no longer fails. See #37443.
- A warning message is now present as expected to state that the monitoring and logging apps must be upgraded to deploy on newly added Windows nodes. See #5530.
- After installing Monitoring v2, the rancher-monitoring-crd now successfully installs. See #35744.
Install/Upgrade Notes
- If you are installing Rancher for the first time, your environment must fulfill the installation requirements.
- The namespace where the local Fleet agent runs has been changed to `cattle-fleet-local-system`. This change does not impact GitOps workflows.
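A quick way to confirm the namespace change after upgrading (a simple sketch, assuming kubectl access to the Rancher local cluster):

```bash
# The local fleet-agent should now be running in cattle-fleet-local-system.
kubectl get pods -n cattle-fleet-local-system
```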
Upgrade Requirements
- Creating backups: We strongly recommend creating a backup before upgrading Rancher. To roll back Rancher after an upgrade, you must back up and restore Rancher to the previous Rancher version. Because Rancher will be restored to the state it was in when the backup was created, any changes made after the upgrade will not be included after the restore. For more information, see the documentation on backing up Rancher.
- Helm version: Rancher install or upgrade must occur with Helm 3.2.x+ due to the changes with the latest cert-manager release. See #29213.
- Kubernetes version:
- The local Kubernetes cluster for the Rancher server should be upgraded to Kubernetes 1.18+ before installing Rancher 2.6+.
- When using Kubernetes v1.21 with Windows Server 20H2 Standard Core, the patch “2019-08 Servicing Stack Update for Windows Server” must be installed on the node. See #72.
- CNI requirements:
- For Kubernetes v1.19 and newer, we recommend disabling firewalld as it has been found to be incompatible with various CNI plugins. See #28840.
- If upgrading or installing to a Linux distribution which uses nf_tables as the backend packet filter, such as SLES 15, RHEL 8, Ubuntu 20.10, Debian 10, or newer, users should upgrade to RKE1 v1.19.2 or later to get Flannel version v0.13.0 that supports nf_tables. See Flannel #1317.
- For users upgrading from `>=v2.4.4` to `v2.5.x` with clusters where ACI CNI is enabled, note that upgrading Rancher will result in automatic cluster reconciliation. This is applicable for Kubernetes versions `v1.17.16-rancher1-1`, `v1.17.17-rancher1-1`, `v1.17.17-rancher2-1`, `v1.18.14-rancher1-1`, `v1.18.15-rancher1-1`, `v1.18.16-rancher1-1`, and `v1.18.17-rancher1-1`. Please refer to the workaround BEFORE upgrading to `v2.5.x`. See #32002.
- Requirements for air gapped environments:
- For installing or upgrading Rancher in an air gapped environment, please add the flag `--no-hooks` to the `helm template` command to skip rendering files for Helm's hooks (see the sketch after this list). See #3226.
- If using a proxy in front of an air gapped Rancher, you must pass additional parameters to `NO_PROXY`. See the documentation and related issue #2725.
- Cert-manager version requirements: Recent changes to cert-manager require an upgrade if you have a high-availability install of Rancher using self-signed certificates. If you are using cert-manager older than v0.9.1, please see the documentation on how to upgrade cert-manager. See documentation.
- Requirements for Docker installs:
- When starting the Rancher Docker container, the privileged flag must be used. See documentation.
- When installing in an air gapped environment, you must supply a custom `registries.yaml` file to the `docker run` command as shown in the K3s documentation. If the registry has certificates, then you will need to also supply those. See #28969.
- When upgrading a Docker installation, a panic may occur in the container, which causes it to restart. After restarting, the container comes up and is working as expected. See #33685.
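For the air gapped requirement above, a sketch of rendering the Rancher chart with Helm hooks skipped; the chart archive name, hostname, and registry shown are placeholders, and the air gap documentation lists the full set of values:

```bash
# Render the Rancher chart offline, skipping Helm hook manifests via --no-hooks.
# The chart archive, hostname, and registry values below are placeholders.
helm template rancher ./rancher-<VERSION>.tgz \
  --output-dir . \
  --no-hooks \
  --namespace cattle-system \
  --set hostname=rancher.example.com \
  --set rancherImage=registry.example.com/rancher/rancher \
  --set useBundledSystemChart=true
```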
Rancher Behavior Changes
- Cert-Manager:
- Rancher now supports cert-manager versions 1.6.2 and 1.7.1. We recommend v1.7.x because v1.6.x will reach end-of-life on March 30, 2022. To read more, see the documentation.
- When upgrading Rancher and cert-manager, you will need to use Option B: Reinstalling Rancher and cert-manager from the Rancher docs.
- There are several versions of cert-manager which, due to their backwards incompatibility, are not recommended for use with Rancher. You can read more about which versions are affected by this issue in the cert-manager docs. As a result, only versions 1.6.2 and 1.7.1 are recommended for use at this time.
- For instructions on upgrading cert-manager from version 1.5 to 1.6, see the relevant cert-manager docs.
- For instructions on upgrading cert-manager from version 1.6 to 1.7, see the relevant cert-manager docs.
- Readiness and Liveness Check:
- Users can now configure the `Readiness Check` and `Liveness Check` of `coredns-autoscaler`. See #24939.
- Users can now configure the
- Legacy Features:
- Users upgrading from Rancher <=v2.5.x will automatically have the `--legacy` feature flag enabled. New installations that require legacy features need to enable the flag on install or through the UI.
- When workloads created using the legacy UI are deleted, the corresponding services are not automatically deleted. Users will need to manually remove these services. A message will be displayed notifying the user to manually delete the associated services when such a workload is deleted. See #34639.
- Users upgrading from Rancher <=v2.5.x will automatically have the
- Library and Helm3-Library Catalogs:
- Users will no longer be able to launch charts from the library and helm3-library catalogs, which are available through the legacy apps and multi-cluster-apps pages. Any existing legacy app that was deployed from a previous Rancher version will continue to be able to edit its currently deployed chart. Note that the Longhorn app will still be available from the library for new installs but will be removed in the next Rancher version. All users are recommended to deploy Longhorn from the Apps & Marketplace section of the Rancher UI instead of through the Legacy Apps pages.
- Local Cluster:
- In older Rancher versions, the local cluster could be hidden to restrict admin access to the Rancher server’s local Kubernetes cluster, but that feature has been deprecated. The local Kubernetes cluster can no longer be hidden, and all admins will have access to the local cluster. If you would like to restrict permissions to the local cluster, there is a new restricted-admin role that must be used. Access to the local cluster can now be disabled by setting `hide_local_cluster` to true from the v3/settings API (see the sketch at the end of this section). See the documentation and #29325. For more information on upgrading from Rancher with a hidden local cluster, see the documentation.
- In older Rancher versions, the local cluster could be hidden to restrict admin access to the Rancher server’s local Kubernetes cluster, but that feature has been deprecated. The local Kubernetes cluster can no longer be hidden and all admins will have access to the local cluster. If you would like to restrict permissions to the local cluster, there is a new restricted-admin role that must be used. The access to local cluster can now be disabled by setting
- Upgrading the Rancher UI:
- After upgrading to `v2.6+`, users will be automatically logged out of the old Rancher UI and must log in again to access Rancher and the new UI. See #34004.
- After upgrading to
- Fleet:
- For users upgrading from `v2.5.x` to `v2.6.x`, note that Fleet will be enabled by default as it is required for operation in `v2.6+`. This will occur even if Fleet was disabled in `v2.5.x`. During the upgrade process, users will observe restarts of the `rancher` pods, which is expected. See #31044 and #32688.
- Starting with Rancher v2.6.1, Fleet allows for two agents in the local cluster for scenarios where “Fleet is managing Fleet”. The true local agent runs in the new `cattle-fleet-local-system` namespace. The agent downstream from another Fleet management cluster runs in `cattle-fleet-system`, similar to the agent in pure downstream clusters. See #34716 and #531.
- For users upgrading from
- Editing and Saving Clusters:
- For users upgrading from `<=v2.4.8 (<= RKE v1.1.6)` to `v2.4.12+ (RKE v1.1.13+)` / `v2.5.0+ (RKE v1.2.0+)`, please note that editing and saving a cluster (even with no changes or a trivial change like the cluster name) will result in cluster reconciliation and an upgrade of `kube-proxy` on all nodes because of a change in `kube-proxy` binds. This only happens on the first edit, and later edits shouldn’t affect the cluster. See #32216.
- For users upgrading from
- EKS Cluster:
- There is currently a setting allowing users to configure the length of refresh time in cron format: `eks-refresh-cron`. That setting is now deprecated and has been migrated to a standard seconds format in a new setting: `eks-refresh`. If previously set, the migration will happen automatically. See #31789.
- There is currently a setting allowing users to configure the length of refresh time in cron format:
- System Components:
- Please be aware that upon an upgrade to v2.3.0+, any edits to a Rancher launched Kubernetes cluster will cause all system components to restart due to added tolerations to Kubernetes system components. Plan accordingly.
- GKE and AKS Clusters:
- Existing GKE and AKS clusters and imported clusters will continue to operate as-is. Only new creations and registered clusters will use the new full lifecycle management.
- Rolling Back Rancher:
- The process to roll back Rancher has been updated for versions v2.5.0 and above. New steps require scaling Rancher down to 0 replicas before restoring the backup. Please refer to the documentation for the new instructions.
- RBAC:
- Due to the change in the provisioning framework, the `Manage Nodes` role will no longer be able to scale machine pools up or down. Users will need the ability to edit the cluster to manage machine pools. See #34474.
- Due to the change of the provisioning framework, the
- Azure Cloud Provider for RKE2:
- For RKE2, the process to set up an Azure cloud provider is different than for RKE1 clusters. Users should refer to the documentation for the new instructions. See #34367 for original issue.
- Machines vs. Kube Nodes:
- In previous versions, Rancher only displayed Nodes, but with v2.6, there are the concepts of `machines` and `kube nodes`. Kube nodes are the Kubernetes node objects and are only accessible if the Kubernetes API server is running and the cluster is active. Machines are the cluster’s machine objects, which define what the cluster should be running.
- In previous versions, Rancher only displayed Nodes, but with v2.6, there are the concepts of
- Rancher’s External IP Webhook:
- In v1.22, upstream Kubernetes has enabled the admission controller to reject usage of external IPs. As such, the `rancher-external-ip-webhook` chart that was created as a workaround is no longer needed, and support for it is now capped to Kubernetes v1.21 and below. See #33893.
- In v1.22, upstream Kubernetes has enabled the admission controller to reject usage of external IPs. As such, the
- Memory Limit for Legacy Monitoring:
- The default value of the Prometheus memory limit in the legacy Rancher UI is now 2000Mi to prevent the pod from restarting due to an OOMKill. See #34850.
- Memory Limit for Monitoring:
- The default value of the Prometheus memory limit in the new Rancher UI is now 3000Mi to prevent the pod from restarting due to an OOMKill. See #34850.
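For the local-cluster change above, a hedged sketch of disabling access through the v3/settings API; the Rancher URL and API token are placeholders, and the exact setting identifier (written `hide_local_cluster` above, shown hyphenated here) should be confirmed against your installation:

```bash
# Hypothetical example: set the hide-local-cluster setting to true via the v3/settings API.
# RANCHER_URL and TOKEN are placeholders; verify the setting's exact ID in your install.
curl -sk "https://${RANCHER_URL}/v3/settings/hide-local-cluster" \
  -X PUT \
  -H "Authorization: Bearer ${TOKEN}" \
  -H 'Content-Type: application/json' \
  -d '{"value": "true"}'
```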
Versions
Please refer to the README for latest and stable versions.
Please review our version documentation for more details on versioning and tagging conventions.
Images
- rancher/rancher:v2.6.5
Tools
Kubernetes Versions
- v1.23.6 (Default)
- v1.22.9
- v1.21.12
- v1.20.15
- v1.19.16
- v1.18.20
Rancher Helm Chart Versions
Starting in 2.6.0, many of the Rancher Helm charts available in Apps & Marketplace will start with a major version of 100. This was done to avoid simultaneous upstream changes and Rancher changes from causing conflicting version increments. This also brings us into compliance with semver, which is a requirement for newer versions of Helm. You can now see the upstream version of a chart in the build metadata, for example: `100.0.0+up2.1.0`. See #32294.
Other Notes
Feature Flags
Feature flags introduced in 2.6.0 and the Harvester feature flag introduced in 2.6.1 are listed below for reference:
| Feature Flag | Default Value | Description |
| --- | --- | --- |
| `harvester` | `true` | Used to manage access to the Harvester list page where users can navigate directly to Harvester host clusters and have the ability to import them. |
| `fleet` | `true` | The previous `fleet` feature flag is now required to be enabled, as the Fleet capabilities are leveraged within the new provisioning framework. If you had this feature flag disabled in earlier versions, upon upgrading to Rancher, the flag will automatically be enabled. |
| `gitops` | `true` | If you want to hide the “Continuous Delivery” feature from your users, use the newly introduced `gitops` feature flag, which hides the ability to leverage Continuous Delivery. |
| `rke2` | `true` | Used to enable the ability to provision RKE2 clusters. By default, this feature flag is enabled, which allows users to attempt to provision these types of clusters. |
| `legacy` | `false` for new installs, `true` for upgrades | There are a set of features from previous versions that are slowly being phased out of Rancher for newer iterations of the feature. This is a mix of deprecated features as well as features that will eventually be moved to newer variations in Rancher. By default, this feature flag is disabled for new installations. If you are upgrading from a previous version, this feature flag will be enabled. |
| `token-hashing` | `false` | Used to enable the new token-hashing feature. Once enabled, existing tokens will be hashed and all new tokens will be hashed automatically using the SHA256 algorithm. Once a token is hashed, it cannot be undone. Once this feature flag is enabled, it cannot be disabled. |
Experimental Features
- Dual-stack and IPv6-only support for RKE1 clusters using the Flannel CNI will be experimental starting in v1.23.x. See the upstream Kubernetes docs. Dual-stack is not currently supported on Windows. See #165.
- RancherD was introduced as part of Rancher v2.5.4 through v2.5.10 as an experimental feature but is now deprecated. See #33423.
Legacy Features
Legacy features are hidden behind the `legacy` feature flag and consist of various Rancher features and functionality that were available in previous releases. These are features that Rancher doesn’t intend for new users to consume, but if you have been using past versions of Rancher, you’ll still want to use this functionality.
When you first start 2.6, there is a card on the Home page that outlines where these features are now located.
The deprecated features from v2.5 are now behind the `legacy` feature flag. Please review our deprecation policy for questions.
The following legacy features are no longer supported on Kubernetes v1.21+ clusters:
- Logging
- CIS Scans
- Istio 1.5
- Pipelines
The following legacy feature is no longer supported on Kubernetes clusters past v1.21:
- Monitoring v1
Known Major Issues
- Kubernetes Cluster Distributions:
- RKE:
- RKE2:
- Amazon ECR Private Registries are not functional. See #33920.
- When provisioning using an RKE2 cluster template, the `rootSize` for AWS EC2 provisioners does not currently take an integer when it should, and an error is thrown. To work around this issue, wrap the EC2 `rootSize` in quotes. See Dashboard #3689.
- RKE2 node driver cluster gets stuck in provisioning state after an upgrade to v2.6.4 and rollback to v2.6.3. See #36859.
- RKE2 node driver cluster has its nodes redeployed when upgrading Rancher from v2.6.3 to v2.6.4. See #36627.
- The communication between the ingress controller and the pods doesn’t work when you create an RKE2 cluster with Cilium as the CNI and activate project network isolation. See documentation and #34275.
- RKE2 - Windows:
- In v2.6.5, v1.21.x of RKE2 will remain experimental and unsupported for RKE2 Windows. End users should not use v1.21.x of RKE2 for any RKE2 cluster that will have Windows worker nodes. This is due to an upstream Calico bug that was not backported to the minor version of Calico (3.19.x) that is present in v1.21.x of RKE2. See #131.
- CSI Proxy for Windows will now work in an air-gapped environment.
- NodePorts do not work on Windows Server 2022 in RKE2 clusters due to a Windows kernel bug. See #159.
- When upgrading Windows nodes in RKE2 clusters via the Rancher UI, Windows worker nodes will require a reboot after the upgrade is completed. See #37645.
- AKS:
- When editing or upgrading the AKS cluster, do not make changes from the Azure console or CLI at the same time. These actions must be done separately. See #33561.
- Windows node pools are not currently supported. See #32586.
- Azure Container Registry-based Helm charts cannot be added in Cluster Explorer, but do work in the Apps feature of Cluster Manager. Note that when using a Helm chart repository, the `disableSameOriginCheck` setting controls when credentials are attached to requests. See documentation and #34584 for more information.
- GKE:
- Basic authentication must be explicitly disabled in GCP before upgrading a GKE cluster to 1.19+ in Rancher. See #32312.
- AWS:
- On the RHEL 8.4 SELinux AWS AMI, Kubernetes v1.22 fails to provision on AWS. As Rancher will not install RPMs on the nodes, users may work around this issue either by using an AMI with the rancher-selinux package already installed, or by installing the package via cloud-init. Users will encounter this issue on upgrade to v1.22 as well. When upgrading to v1.22, users must manually upgrade/install the rancher-selinux package on all the nodes in the cluster, then upgrade the Kubernetes version. See #36509.
- Infrastructures:
- vSphere:
- `PersistentVolumes` are unable to mount to custom vSphere hardened clusters using CSI charts. See #35173.
- Harvester:
- Upgrades from Harvester v0.3.0 are not supported.
- Deploying Fleet to Harvester clusters is not yet supported. Clusters, whether Harvester or non-Harvester, imported using the Virtualization Management page will result in the cluster not being listed on the Continuous Delivery page. See #35049.
- Cluster Tools:
- Fleet:
- Multiple `fleet-agent` pods may be created and deleted during initial downstream agent deployment, rather than just one. This resolves itself quickly, but is unintentional behavior. See #33293.
- Hardened clusters:
- Not all cluster tools can currently be installed on a hardened cluster.
- Rancher Backup:
- When migrating to a cluster with the Rancher Backup feature, the server-url cannot be changed to a different location. It must continue to use the same URL.
- When running a newer version of the rancher-backup app to restore a backup made with an older version of the app, the `resourceSet` named `rancher-resource-set` will be restored to an older version that might be different from the one defined in the currently running rancher-backup app. The workaround is to edit the rancher-backup app to trigger a reconciliation. See #34495.
- Because Kubernetes v1.22 drops the apiVersion `apiextensions.k8s.io/v1beta1`, trying to restore an existing backup file into a v1.22 cluster will fail because the backup file contains CRDs with the apiVersion v1beta1. There are two options to work around this issue: update the default `resourceSet` to collect the CRDs with the apiVersion v1, or update the default `resourceSet` and the client to use the new APIs internally. See documentation and #34154.
- Monitoring:
- Deploying Monitoring on a Windows cluster with win_prefix_path set requires users to deploy Rancher Wins Upgrader to restart wins on the hosts to start collecting metrics in Prometheus. See #32535.
- Logging:
- Windows nodeAgents are not deleted when performing helm upgrade after disabling Windows logging on a Windows cluster. See #32325.
- Istio Versions:
- Istio 1.12 and below do not work on Kubernetes 1.23 clusters. To use the Istio charts, please do not update to Kubernetes 1.23 until the next charts’ release.
- Istio 1.5 is not supported in air-gapped environments. Please note that the Istio project has ended support for Istio 1.5.
- Istio 1.9 support ended on October 8th, 2021.
- The Kiali dashboard bundled with 100.0.0+up1.10.2 errors on a page refresh. Instead of refreshing the page when needed, simply access Kiali using the dashboard link again. Everything else works in Kiali as expected, including the graph auto-refresh. See #33739.
- In Istio v1.10.4, Kubernetes IP service is set to default IP, which does not work for all environments. To work around this issue, install Istio version 100.1.0+up1.11.4 in the downstream cluster, and installation will complete successfully. Note that the new install will not include the Kiali CRD. See #35339.
- As part of the upgrade to Istio 1.11.4, Kiali was upgraded to 1.41, which removed the CRD installation. If you upgraded from a previous version of `rancher-istio`, you will need to manually delete the `rancher-kiali-server-crd` found on the installed apps page, since it is no longer in use after the upgrade completes. See #35686.
- A `failed calling webhook "validation.istio.io"` error will occur in air gapped environments if the `istiod-istio-system` `ValidatingWebhookConfiguration` exists and you attempt a fresh install of Istio 1.11.x or higher. To work around this issue, run the command `kubectl delete validatingwebhookconfiguration istiod-istio-system` and attempt your install again. See #35742.
- Deprecated resources are not automatically removed and will cause errors during upgrades. Manual steps must be taken to migrate and/or clean up resources before an upgrade is performed. See #34699.
- Applications injecting Istio sidecars fail on SELinux-enabled RHEL 8.4 clusters. A temporary workaround for this issue is to run the following command on each cluster node before creating a cluster: `mkdir -p /var/run/istio-cni && semanage fcontext -a -t container_file_t /var/run/istio-cni && restorecon -v /var/run/istio-cni`. See #33291.
- Legacy Monitoring:
- The Grafana instance inside Cluster Manager’s Monitoring is not compatible with Kubernetes v1.21. To work around this issue, disable the `BoundServiceAccountTokenVolume` feature in Kubernetes v1.21 and above. Note that this workaround will be deprecated in Kubernetes v1.22. See #33465.
- In air gapped setups, the generated `rancher-images.txt` that is used to mirror images on private registries does not contain the images required to run Legacy Monitoring, which is compatible with Kubernetes v1.15 clusters. If you are running Kubernetes v1.15 clusters in an air gapped environment, and you want to either install Legacy Monitoring or upgrade Legacy Monitoring to the latest that is offered by Rancher for Kubernetes v1.15 clusters, you will need to take one of the following actions:
  - Upgrade the Kubernetes version so that you can use v0.2.x of the Monitoring application Helm chart.
  - Manually import the necessary images into your private registry for the Monitoring application to use.
- When deploying any downstream cluster, Rancher logs errors that seem to be related to Monitoring even when Monitoring is not installed onto either cluster; specifically, Rancher logs that it `failed on subscribe` to the Prometheus CRs in the cluster because it is unable to get the resource `prometheus.meta.k8s.io`. These logs appear in a similar fashion for other Prometheus CRs (namely Alertmanager, ServiceMonitors, and PrometheusRules), but do not seem to cause any other major impact in functionality. See #32978.
- Legacy Monitoring does not support Kubernetes v1.22 due to the `feature-gates` flag no longer being supported. See #35574.
- After performing an upgrade to Rancher v2.6.3 from v2.6.2, the Legacy Monitoring custom metric endpoint stops working. To work around this issue, delete the service that is being targeted by the servicemonitor and allow it to be recreated; this will reload the pods that need to be targeted on a service sync. See #35790.
- The Grafana instance inside Cluster Manager’s Monitoring is not compatible with Kubernetes v1.21. To work around this issue, disable the
- Fleet:
- Docker Installations:
- UI issues may occur due to a longer startup time. Users will receive an error message when launching Docker for the first time (#28800), and users are directed to the username/password screen when accessing the UI after a Docker install of Rancher. See #28798.
- On a Docker install upgrade and rollback, Rancher logs will repeatedly display the messages “Updating workload `ingress-nginx/nginx-ingress-controller`” and “Updating service `frontend` with public endpoints”. Ingresses and clusters are functional and active, and the logs resolve eventually. See #35798.
- Rancher single node won’t start on Apple M1 devices with Docker Desktop 4.3.0 or newer. See #35930.
- Rancher UI:
- The Deployment securityContext section is missing when a new workload is created. This prevents pods from starting when Pod Security Policy Support is enabled. See #4815.
- Legacy UI:
- When using the Rancher v2.6 UI to add a new port of type ClusterIP to an existing Deployment created using the legacy UI, the new port will not be created upon saving. To work around this issue, repeat the procedure to add the port again. Users will notice the Service Type field displays as `Do not create a service`. Change this to ClusterIP and, upon saving, the new port will be created successfully during this subsequent attempt. See #4280.
- When using the Rancher v2.6 UI to add a new port of type ClusterIP to an existing Deployment created using the legacy UI, the new port will not be created upon saving. To work around this issue, repeat the procedure to add the port again. Users will notice the Service Type field will display as