Release v2.2.5
Important notes
- This release addresses a security vulnerability found in Rancher. The issue was found and reported by Matt Belisle and Alex Stevenson at Workiva, and applies to Rancher versions v2.0.0-v2.0.15, v2.1.0-v2.1.10, and v2.2.0-v2.2.4. The fix for this vulnerability is also available in Rancher v2.1.11 and v2.0.16. Rancher v1.6 is not affected. The vulnerability is a Cross-Site WebSocket Hijacking attack, which allows an exploiter to gain access to clusters managed by Rancher with the roles/permissions of a victim. It requires that a victim be logged into a Rancher server and then access a third-party site hosted by the exploiter. Once that happens, the exploiter can execute commands against the Kubernetes API with the permissions and identity of the victim. You can view the official CVE here [CVE-2019-13209]. A rough way to probe whether a server validates WebSocket origins is sketched after the version table below.
As a result, the following versions are now latest and stable:
Type | Rancher Version | Docker Tag | Helm Repo | Helm Chart Version |
---|---|---|---|---|
Latest | v2.2.5 | rancher/rancher:latest | server-charts/latest | v2.2.5 |
Stable | v2.2.5 | rancher/rancher:stable | server-charts/stable | v2.2.5 |
Please review our version documentation for more details on versioning and tagging conventions.
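As an illustrative check (not Rancher's official test), you can attempt a WebSocket handshake with a forged Origin header; a patched server should refuse to upgrade the connection. The endpoint path and session cookie below are hypothetical placeholders; substitute values from your own test environment:

```
# Attempt a cross-origin WebSocket handshake against a Rancher endpoint.
# "/v3/subscribe" and the R_SESS cookie are placeholders; use a real
# authenticated WebSocket endpoint and token from your own test setup.
curl -i -k --http1.1 \
  -H "Connection: Upgrade" \
  -H "Upgrade: websocket" \
  -H "Sec-WebSocket-Version: 13" \
  -H "Sec-WebSocket-Key: dGhlIHNhbXBsZSBub25jZQ==" \
  -H "Origin: https://attacker.example.com" \
  -H "Cookie: R_SESS=<token>" \
  https://<rancher-server>/v3/subscribe
# A fixed server responds with an error instead of "101 Switching Protocols".
```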
- This release enables experimental support for Kubernetes v1.15 and official support for Kubernetes v1.14.
Features and Enhancements
- Added official support for Kubernetes v1.14 [20873]
- Added experimental support for Kubernetes v1.15 [21088]
- Added support for CoreDNS as a default DNS provider in Kubernetes clusters running v1.14 and up [20872]
- Added support to expose certificate expiration info in the UI for Rancher provisioned clusters and alert if certificates expire in 30 days or less [20994] (see the expiry-check sketch after this list)
- Added support for a custom CA in the snapshot configuration for Rancher provisioned clusters, allowing the S3 snapshot service to trust internally signed certs [21186]
- Added Kubernetes v1.13 support for EKS clusters [21147]
- Added support for the Mumbai, London, and Paris regions for EKS clusters [21160]
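If you want to verify expiry dates by hand, a minimal check with openssl looks like the following; the certificate path is illustrative (RKE-provisioned nodes typically keep certificates under /etc/kubernetes/ssl/):

```
# Print the expiry date of a cluster certificate on a node.
# The file name below is an example; inspect /etc/kubernetes/ssl/ for the
# certificates actually present on your node.
openssl x509 -in /etc/kubernetes/ssl/kube-apiserver.pem -noout -enddate
```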
Major Bugs Fixed Since v2.2.4
The following is a list of the major bugs fixed. Review our milestone to see the full list.
- Fixed an issue where project members weren't displayed in the UI after an upgrade to Rancher v2.2.4 [20825]
- Fixed an issue where node driver machine provisioning could fail with a "something went wrong running an SSH command" error [20753]
- Fixed an issue where etcd snapshots could time out with Minio configured as a backup target [19496]
- Fixed an issue where setting the HTTP_PROXY and HTTPS_PROXY environment variables in your rancher-server container to allow it to reach the public Internet prevented you from provisioning nodes using Rancher's node driver functionality [20709] (a proxy configuration sketch follows this list)
- Fixed an issue where removing an etcd member could sometimes result in a broken etcd in a Rancher provisioned cluster [19696]
- Fixed an issue where helm was timing out on app updates [20289]
- Fixed an issue where the Rancher server would crash every five minutes in setups with etcd snapshots configured [20964]
- Fixed an issue where a catalog app with a bad answer could not be repaired once the answer was saved [21027]
- Fixed an issue where Rancher provisioned clusters failed to operate when the Azure cloud config used a Service Principal with multiple subscriptions [21124]
- Fixed an issue where a regular user couldn't list their multi-cluster app's revisions [20919]
- Fixed an issue where the http_proxy parameter wasn't respected by an alert notifier [20926]
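For reference, a minimal single-node sketch of running Rancher behind a proxy looks like this; the proxy address and NO_PROXY list are placeholders you must adapt to your network:

```
# Run the rancher-server container behind an HTTP proxy (illustrative values).
docker run -d --restart=unless-stopped \
  -p 80:80 -p 443:443 \
  -e HTTP_PROXY="http://proxy.example.com:8888" \
  -e HTTPS_PROXY="http://proxy.example.com:8888" \
  -e NO_PROXY="localhost,127.0.0.1,10.0.0.0/8" \
  rancher/rancher:v2.2.5
```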
Other notes
Certificate expiry on Rancher provisioned clusters
In Rancher 2.0 and 2.1, the auto-generated certificates for Rancher provisioned clusters expire one year after they are issued. This means that if you created a Rancher provisioned cluster about one year ago, you need to rotate the certificates; otherwise, the cluster will go into a bad state when the certificates expire. In Rancher 2.2.x, the rotation can be performed from the Rancher UI; more details are here.
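To see when the certificate presented by a cluster's Kubernetes API endpoint expires, a quick remote check (host and port are illustrative) is:

```
# Inspect the validity window of the cert served by a control plane node.
openssl s_client -connect control-plane.example.com:6443 </dev/null 2>/dev/null \
  | openssl x509 -noout -dates
```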
Additional Steps Required for Air Gap Installations and Upgrades
In v2.2.0, we’ve introduced a “system catalog” for managing micro-services that Rancher deploys for certain features such as Global DNS, Alerts, and Monitoring. These additional steps are documented as part of air gap installation instructions.
Known Major Issues
- Cluster alerting and logging can get stuck in Updating state after upgrading Rancher. Workaround steps are provided in the issue [21480]
- Certificate rotation for Rancher provisioned clusters will not work for clusters whose certificates expired on Rancher versions v2.0.13 and earlier on the 2.0.x release line, or v2.1.8 and earlier on the 2.1.x release line. The issue does not exist if the certificates expired on later versions of Rancher. Workaround steps can be found in the comments to [20381]
- Catalog app revisions are not visible to a regular user; as a result, a regular user is not able to roll back the app [20204]
- Global DNS entries are not properly updated when a node that was hosting an associated ingress becomes unavailable. A records for the unavailable hosts will remain on the ingress and in the DNS entry [#18932]
- If you have a Rancher cluster with the OpenStack cloud provider and a LoadBalancer set, and the cluster was provisioned on version v2.2.3 or earlier, the upgrade to Rancher v2.2.4 and up will fail. Steps to mitigate can be found in the comment to [20699]
Versions
Images
- rancher/rancher:v2.2.5
- rancher/rancher-agent:v2.2.5
Tools
System Charts Branch - For air gap installs
- system charts branch - release-v2.2
- This is the branch used to populate the catalog items required for tools such as monitoring, logging, alerting and global DNS. To be able to use these features in an air gap install, you will need to mirror the system-charts repository to a location in your network that Rancher can reach and configure Rancher to use that repository.
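A minimal mirroring sketch with git follows; the internal URL is a placeholder for your own git server:

```
# Mirror the system-charts repository into your air gapped network.
git clone --mirror https://github.com/rancher/system-charts.git
cd system-charts.git
# "git.internal.example.com" is a placeholder for your internal git server.
git push --mirror https://git.internal.example.com/rancher/system-charts.git
```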
Kubernetes
Upgrades and Rollbacks
Rancher supports both upgrade and rollback starting with v2.0.2. Please take note of the exact version you would like to upgrade or roll back to before changing the Rancher version.
Due to the HA improvements introduced in the v2.1.0 release, the Rancher helm chart is the only supported method for installing or upgrading Rancher. Please use the Rancher helm chart to install HA Rancher. For details, see the HA Install - Installation Outline.
If you are currently using the RKE add-on install method, see Migrating from an RKE add-on install for details on how to move to using a helm chart.
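As an illustrative upgrade command for an HA install (Helm 2 era; the hostname is a placeholder, and the repo URL is assumed from the server-charts repos in the table above):

```
# Upgrade an HA install with the Rancher helm chart (illustrative values).
helm repo add rancher-latest https://releases.rancher.com/server-charts/latest
helm repo update
helm upgrade rancher rancher-latest/rancher \
  --set hostname=rancher.example.com
```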
When upgrading from a version prior to v2.0.3, new pods will be created when scaling up workloads [#14136]. In order to update scheduling rules for workloads [#13527], a new field was added to all workloads on update, which causes any pods in workloads from previous versions to be re-created.
Note: When rolling back, we expect you to roll back to the state at the time of your upgrade. Any changes made after the upgrade will not be reflected. When rolling back using a Rancher single-node install, you must specify the exact version you want to change the Rancher version to, rather than using the default :latest tag.
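For example, a single-node rollback would pin the tag explicitly; the rancher-data container name below follows the usual single-node upgrade procedure and the target version is illustrative:

```
# Start the rolled-back server with an explicit version tag, never :latest.
docker run -d --restart=unless-stopped \
  -p 80:80 -p 443:443 \
  --volumes-from rancher-data \
  rancher/rancher:v2.2.4
```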
Note: If you had the helm stable catalog enabled in v2.0.0, we've updated the catalog to point directly to the Kubernetes helm repo instead of an internal repo. Please delete the custom catalog that is now showing up and re-enable the helm stable catalog. [#13582]