# Release v2.3.0

## Important

- Please be aware that upon an upgrade to v2.3.0, any edits to a Rancher launched Kubernetes cluster will cause all system components to restart due to added tolerations to Kubernetes system components (an example toleration is sketched below). Plan accordingly.
- Recent changes to cert-manager require an upgrade if you have an HA install of Rancher using self-signed certificates. If you are using cert-manager older than v0.9.1, please see the documentation on how to upgrade cert-manager.
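For context, a toleration is the pod-spec entry that lets a workload run on a node carrying a matching taint; adding new tolerations to system components forces Kubernetes to recreate those pods, which is why the restart occurs. The entry below is purely illustrative of the shape of a toleration, not the actual values Rancher adds:

```yaml
# Illustrative toleration in a pod spec; key and effect are example values only.
tolerations:
  - key: "node-role.kubernetes.io/controlplane"
    operator: "Exists"
    effect: "NoSchedule"
```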
## Breaking Changes

- Rancher CLI Changes - `rancher app install` no longer blocks until the app is finished installing; it exits immediately. If you need the CLI to block until the app is finished installing, please use `rancher wait app install`. [#17471]
- Creating clusters with YAML - If you have saved a YAML file and use it to provision Rancher launched Kubernetes clusters, the YAML needs to be updated. Any fields that were related to the RKE options now need to be nested under `rancher_kubernetes_engine_config:` (see the sketch after this list).
- Kubernetes 1.16 API Endpoint Changes - With Kubernetes 1.16, there are new API endpoints and potential deprecation of endpoints used by older Kubernetes versions. Please review your workloads and catalogs to confirm that they are compatible with Kubernetes 1.16. Rancher recommends using the experimental version of Kubernetes 1.16 that is shipped with Rancher to test that your API endpoints will still work. [#22426]
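A minimal sketch of the new nesting; `rancher_kubernetes_engine_config` comes from the note above, while the option fields shown under it are illustrative examples of RKE options, not a verified list:

```yaml
# Before v2.3.0, RKE options such as these were top-level fields in the saved YAML.
# From v2.3.0 on, they must be nested under rancher_kubernetes_engine_config.
name: my-cluster                # hypothetical cluster name
rancher_kubernetes_engine_config:
  network:
    plugin: canal               # example RKE option, previously top-level
  services:
    etcd:
      snapshot: true            # example RKE option, previously top-level
```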
The following versions are now latest and stable:
Type | Rancher Version | Docker Tag | Helm Repo | Helm Chart Version
---|---|---|---|---
Latest | v2.3.0 | rancher/rancher:latest | server-charts/latest | v2.3.0
Stable | v2.2.8 | rancher/rancher:stable | server-charts/stable | v2.2.8
Please review our version documentation for more details on versioning and tagging conventions.
## Features and Enhancements

- Windows GA [#16460] - Support for provisioning Windows clusters is now GA! Windows clusters are supported for Kubernetes 1.15 on Windows Server versions 1809 and 1903. Windows support is only available for newly created clusters and requires the flannel network provider. You will not need to do any specific scheduling to ensure your Windows workloads are scheduled onto Windows nodes: when creating a Windows cluster, Rancher automatically adds taints to the required Linux nodes to prevent any Windows workloads from being scheduled onto them. If you are trying to schedule Linux workloads into the cluster, you will need to add specific tolerations and node scheduling rules in order to have them deployed on the Linux nodes (see the sketch below).
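  A minimal sketch of scheduling a Linux workload into such a cluster. The node label is the standard upstream one; the taint key and value are assumptions about what Rancher applies to its Linux nodes, so verify them against your actual nodes:

  ```yaml
  # Hypothetical pod spec fragment for a Linux workload in a Windows cluster.
  spec:
    nodeSelector:
      kubernetes.io/os: linux        # standard node label; pins the pod to Linux nodes
    tolerations:
      - key: "cattle.io/os"          # assumed taint key; check `kubectl describe node`
        operator: "Equal"
        value: "linux"
        effect: "NoSchedule"
  ```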
- Istio [#19582] - Operating Istio in Rancher is now easier through a simplified way of installing and configuring the Istio stack. The stack comes out of the box with the Kiali dashboard for traffic and telemetry visualization, Jaeger for tracing, and Prometheus/Grafana for observability. Developers can interact with all of the Istio capabilities through their everyday kubectl workflows (see the example below).
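  As an example of that kubectl-centric workflow, enabling sidecar injection for a namespace uses the standard upstream Istio label; the namespace name here is hypothetical:

  ```yaml
  # Standard Istio convention: label a namespace so newly created pods
  # get the Envoy sidecar injected automatically.
  apiVersion: v1
  kind: Namespace
  metadata:
    name: demo                  # hypothetical namespace
    labels:
      istio-injection: enabled
  ```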
- RKE Templates [#14337] - Users are able to create RKE templates, which define how a Rancher launched Kubernetes cluster will be deployed. Admins can turn on enforcement of these RKE templates to lock down and control what types of clusters their users can deploy. Creation and revision of RKE templates are controlled with RBAC. When a template is created, the creator can decide which options can be overridden by a user and which keep the defaults set by the creator.
- Self Healing Node Pools [#15737] - Users can now set the amount of time to wait before Rancher server deletes an unresponsive node and re-creates it. This new setting ensures that the node pool of your cluster will always consist of a specific number of active and available nodes (a sketch follows).
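  A sketch of what this could look like on a node pool; the field name `deleteNotReadyAfterSecs` is an assumption about Rancher's node pool spec and should be verified against the Rancher API documentation:

  ```yaml
  # Hypothetical node pool fragment: replace a node that stays unresponsive too long.
  hostnamePrefix: worker-
  quantity: 3
  deleteNotReadyAfterSecs: 300   # assumed field name; the wait time before deletion
  ```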
- Ability to receive new Kubernetes versions in Rancher [#18041] - Rancher can ship Kubernetes versions before shipping a Rancher server version that is packaged with them. Users will automatically receive these Kubernetes versions in their Rancher setup without upgrading Rancher server, and these versions can be used to provision Rancher launched Kubernetes clusters. For air gap installations, you will need to create a mirrored registry for the metadata in order to receive the versions.
- Ability to add taints to nodes [#13972] - Taints are used by Kubernetes to help control which workloads are deployed onto which nodes. The ability to add taints provides more flexibility in scheduling workloads. Taints can be added while registering custom nodes or while provisioning clusters using node pools or node templates (see the sketch below).
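  For reference, this is the standard Kubernetes taint structure; where exactly it is entered in a node pool or node template is Rancher-specific, and the key/value here are hypothetical:

  ```yaml
  # A taint has a key, an optional value, and an effect
  # (NoSchedule, PreferNoSchedule, or NoExecute).
  taints:
    - key: "dedicated"           # hypothetical key
      value: "gpu-workloads"     # hypothetical value
      effect: "NoSchedule"
  ```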
- Added Google as an external authentication provider [#1411] - Google can now be set up as an authentication provider. Google is only supported for Rancher setups that have an FQDN due to Google's requirements.
- Ability to deploy HPA [#19084] - Added the ability to deploy the Horizontal Pod Autoscaler using the Rancher UI (a reference manifest is sketched below).
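  For reference, the object being created is a standard Kubernetes HorizontalPodAutoscaler; the name, target deployment, and thresholds below are hypothetical:

  ```yaml
  apiVersion: autoscaling/v2beta2
  kind: HorizontalPodAutoscaler
  metadata:
    name: web-hpa                # hypothetical name
  spec:
    scaleTargetRef:
      apiVersion: apps/v1
      kind: Deployment
      name: web                  # hypothetical target deployment
    minReplicas: 2
    maxReplicas: 10
    metrics:
      - type: Resource
        resource:
          name: cpu
          target:
            type: Utilization
            averageUtilization: 80   # scale out above 80% average CPU
  ```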
- Ability to add min/max Rancher versions to catalog apps [#18469] - You can now add a minimum and maximum version of Rancher server compatibility to catalog apps. This gives more control in ensuring that catalog apps are well tested and leverage features introduced in specific versions of Rancher (see the sketch below).
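  A sketch of how a chart might declare this; the field names `rancher_min_version` and `rancher_max_version` and their placement in the chart's `questions.yml` are assumptions to be checked against the catalog documentation:

  ```yaml
  # questions.yml (assumed location and field names)
  rancher_min_version: 2.3.0     # hypothetical lower bound
  rancher_max_version: 2.3.99    # hypothetical upper bound
  ```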
- Hosted Kubernetes Provider Enhancements
- Ability to set the DNS policy of the ingress controller [RKE #1531] (see the sketch below)
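  A sketch of where this could live in an RKE `cluster.yml`; the `dns_policy` field name is an assumption, though the value shown is a standard Kubernetes DNS policy:

  ```yaml
  # Hypothetical cluster.yml fragment
  ingress:
    provider: nginx
    dns_policy: ClusterFirstWithHostNet   # assumed field name; standard k8s dnsPolicy value
  ```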
- Added the `--helm-wait` and `--helm-timeout` options to the `rancher app install` command [#17471]
- Ability to configure the expiration time for the UI token [#16467]
- Ability to assign annotations and labels to clusters [#21700]
- Added the ability to create a Windows safe timestamp for backups [#22019]
## Experimental Features

We have introduced the ability to turn experimental components inside Rancher on and off. Please refer to our docs on how to turn on these features.
## Major Bugs Fixed Since v2.2.8
- Fixed an issue where EKS would not allow fewer than 3 worker nodes [#18243]
- Fixed a UI issue where users were unable to select custom project roles [#19860]
- Fixed an issue where non-admins were unable to see revisions of an application [#20204]
- Fixed an issue where the CLI did not honor application rollbacks [#20750]
- Fixed an issue where monitoring was not respecting node selectors [#22760]
- Fixed an issue where Calico health checks would break after editing the calico-node deployment in the UI [#22848]
- Fixed an issue where you could not update/edit ingress names if they contained a period [#20809]
- Fixed an issue where it wasn’t clear on the edit screen if you were attempting to edit an imported cluster [#21562]
- Fixed an issue where etcd snapshots weren't recognizing the `https_proxy` settings from Docker [RKE #1369]
- Fixed an issue where Azure node pools wouldn’t support VM managed disks [#15788]
- Fixed an issue where upgrading an app through the CLI would not work when there were no answers to pass in [#17405]
- Fixed an issue where node port mappings for imported clusters weren’t displaying correctly [#15482]
- Fixed an issue where logging and monitoring were not working with clusters that had enabled the `restricted` pod security policy [#22593, #21082]
- Fixed an issue where backups to buckets in the AWS ap-east-1 region would fail [#22355]
- Fixed an issue where the Rancher server chart wasn’t respecting the external TLS option [#20573]
- Fixed an issue where Slack notifications were not sent when a pod was restarted [#20339]
- Fixed an issue where, during `rancher app upgrade`, previous answers were being lost in the application [#17540]
- Fixed an issue where disk utilization was reporting incorrectly [#21254]
- Fixed an issue where testing fluentd logging would produce errors [#20120]
- Fixed an issue where pipelines wouldn't deploy a catalog application from a project scoped catalog after attempting to upgrade the app [#21282]
- Fixed an issue where fluentd would consume all of a node's resources, by upgrading to the latest version of fluentd [#22689]
- Fixed an issue where Rancher catalogs couldn't launch charts dependent on other charts [#18535]
- Fixed an issue with handling notifiers when there is only one notifier [#18616]
- Fixed an issue where deploying a catalog application in the UI would fail when an answer that expected a string was given a numeric string [#13158]
- Fixed an issue where creating clusters using Rancher CLI would lose the cloud provider information [#15098]
- Many UI bug fixes and enhancements
## Other notes

### Air Gap Installations and Upgrades

In v2.3.0, an air gap install no longer requires mirroring the systems chart git repo. Please follow the directions on how to install Rancher to use the packaged systems chart.
## Known Major Issues

- Windows Limitations - There are a couple of known limitations with Windows due to upstream issues:
  - Windows pods cannot access the Kubernetes API when using the VXLAN (Overlay) backend for the flannel network provider. The workaround is to use the Host Gateway (L2bridge) backend for the flannel network provider (see the sketch below). [#20968]
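    A sketch of selecting the host-gw backend in an RKE `cluster.yml`; the option name `flannel_backend_type` is an assumption to verify against the RKE documentation:

    ```yaml
    # Hypothetical cluster.yml fragment
    network:
      plugin: flannel
      options:
        flannel_backend_type: host-gw   # assumed option name; vxlan is the usual default
    ```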
  - Logging only works with the Host Gateway (L2bridge) backend for the flannel network provider [#20510]
- Istio Limitation - Istio will not work with a `restricted` pod security policy [#22469]
- HPA Limitation - The HPA UI doesn't work on GKE clusters, as GKE doesn't support the `v2beta2.autoscaling` API [#22292]
- Hardening Guide Limitations - If you have used Rancher's hardening guide, there are some known issues
- Taints added to existing node templates in an upgraded setup will not be applied unless a reconcile is triggered on the cluster. Scaling worker nodes up or down does not trigger a reconcile, but scaling control plane/etcd nodes up or down, or editing the cluster (for example, upgrading to the latest Kubernetes version), will update the nodes to support the taints. [#22672]
- Cluster alerting and logging can get stuck in an `Updating` state after upgrading Rancher. Workaround steps are provided in the issue [#21480]
- If you have a Rancher cluster with the OpenStack cloud provider and a LoadBalancer set, and the cluster was provisioned on version 2.2.3 or earlier, the upgrade to Rancher v2.2.4 and up will fail. Steps to mitigate can be found in the comment on [#20699]
## Versions

### Images

- rancher/rancher:v2.3.0
- rancher/rancher-agent:v2.3.0

### Tools

### Kubernetes
## Upgrades and Rollbacks

Rancher supports both upgrades and rollbacks. Please note the version you would like to upgrade or roll back to before changing the Rancher version.

Please be aware that upon an upgrade to v2.3.0+, any edits to a Rancher launched Kubernetes cluster will cause all system components to restart due to added tolerations to Kubernetes system components. Plan accordingly.

Recent changes to cert-manager require an upgrade if you have an HA install of Rancher using self-signed certificates. If you are using cert-manager older than v0.9.1, please see the documentation on how to upgrade cert-manager.

Important: When rolling back, we expect you to roll back to the state at the time of your upgrade. Any changes made after the upgrade will not be preserved.