Rancher Release - v1.6.15

Release v1.6.15

Versions

Supported Docker Versions

  • Docker 1.12.3-1.12.6
  • Docker 1.13.1
  • Docker 17.03-ce/ee
  • Docker 17.06-ce/ee
  • Docker 17.09-ce/ee
  • Docker 17.12-ce/ee

Note: Kubernetes 1.9/1.8 supports Docker 1.12.6, 1.13.1 and 17.03.2. Kubernetes 1.7 supports up to Docker 1.12.6

Kubernetes Versions

List of images required to launch Kubernetes template:

  • rancher/k8s:v1.9.4-rancher1-1
  • rancher/etcd:v2.3.7-13
  • rancher/kubectld:v0.8.6
  • rancher/etc-host-updater:v0.0.3
  • rancher/kubernetes-agent:v0.6.7
  • rancher/kubernetes-auth:v0.0.8
  • rancher/lb-service-rancher:v0.7.17
  • busybox

For the list of versions for the Kubernetes add-ons embedded in the Rancher Kubernetes images, please refer to the kubernetes-package repo for the specific images and versions.

Rancher Server Tags

Rancher server has 2 different tags. For each major release tag, we will provide documentation for the specific version.

  • rancher/server:latest tag will be our latest development builds. These builds will have been validated through our CI automation framework. These releases are not meant for deployment in production.
  • rancher/server:stable tag will be our latest stable release builds. This tag is the version that we recommend for production.

Please do not use releases with a rc{n} suffix. These rc builds are meant for the Rancher team to test builds.

Beta - v1.6.15 - rancher/server:latest

Stable - v1.6.14 - rancher/server:stable
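
For reference, launching the stable tag follows the standard single-container install; a minimal sketch (the `docker info` guard is a convenience so the command is skipped when no daemon is reachable, not part of the documented procedure):

```shell
IMAGE=rancher/server:stable

# Single-container install of the stable release.
# Skipped when no Docker daemon is reachable.
if docker info >/dev/null 2>&1; then
  docker run -d --restart=unless-stopped -p 8080:8080 "$IMAGE"
fi
```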

Important - Kubernetes Security

With this release, we are addressing several Kubernetes vulnerabilities:

  1. The vulnerability CVE-2017-1002101 allows containers to use a subpath volume mount with any volume type to access files outside of the volume. This means that if you are blocking container access to hostpath volumes with PodSecurityPolicy, an attacker with the ability to update or create pods can mount any hostpath using any other volume type.

  2. The vulnerability CVE-2017-1002102 allows containers using certain volume types - including secrets, config maps, projected volumes, or downward API volumes - to delete files outside of the volume. This means that if a container using one of these volume types is compromised, or if you allow untrusted users to create pods, an attacker could use that container to delete arbitrary files on the host.

  3. Rancher is securing the kubelet port 10250 by no longer allowing anonymous access and requiring a valid cert. This is the port used for Kubernetes apiserver-to-kubelet communication, and keeping it exposed allows anonymous access to your compute nodes. Upgrading to the latest Kubernetes version resolves this issue. If you have not already done so, you can also visit the Rancher Docs site for specific instructions on how to secure your Kubernetes cluster without upgrading your environment.
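
The effect of this change can be spot-checked from any machine that can reach a host; a minimal sketch, assuming `curl` is available and using a placeholder node IP:

```shell
NODE_IP=203.0.113.10   # placeholder; substitute one of your Kubernetes hosts

# Before the fix, an anonymous request to the kubelet API on port 10250
# returned data; after upgrading it should be rejected (HTTP 401/403).
if command -v curl >/dev/null 2>&1; then
  curl -sk --connect-timeout 5 -o /dev/null -w '%{http_code}\n' \
    "https://${NODE_IP}:10250/pods" || true
fi
```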

Rancher v1.6.15 ships with k8s v1.9.4, which addresses these vulnerabilities. If you are on Rancher v1.6.14 (the current stable version) or v1.6.13, you will also be prompted to upgrade your existing k8s v1.8.5 to v1.8.9. We highly recommend upgrading as soon as possible.

Important - Upgrade

  • Users on a version prior to Rancher v1.5.0: We will automatically upgrade the network-services infrastructure stack, as this release will not work without that upgrade.

  • Users on a version prior to Rancher v1.6.0: If you make any changes to the default Rancher library setting for your catalogs and then roll back, you will need to reset the branch used for the default Rancher library under Admin → Settings → Catalog. The current default branch is v1.6-release, but the old default branch is master.

  • Rollback Versions: We support rolling back to Rancher v1.6.14 from Rancher v1.6.15.

    • Steps to Rollback:
      1. In the upgraded version, go to Admin → Advanced Settings → API values and update the upgrade.manager value to all.
      2. “Upgrade” Rancher server by pointing it at the older version of Rancher (v1.6.14). This should include backing up your database and launching Rancher to point to your current database.
      3. Once Rancher starts up again, all infrastructure stacks will automatically rollback to the applicable version in v1.6.14.
      4. After your setup is back to its original state, update the upgrade.manager value back to the original value that you had (either mandatory or none).
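
The “Upgrade” in step 2 can follow the standard single-container upgrade flow; a minimal sketch, assuming a single-node install where `rancher-server` is a placeholder container name:

```shell
OLD=rancher-server   # placeholder; substitute your Rancher server container

if docker info >/dev/null 2>&1; then
  docker stop "$OLD"
  # Preserve the embedded database in a data container.
  docker create --volumes-from "$OLD" --name rancher-data rancher/server
  # Launch the older version pointing at the same data.
  docker run -d --volumes-from rancher-data --restart=unless-stopped \
    -p 8080:8080 rancher/server:v1.6.14
fi
```

Back up your database before doing this, as noted above.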

Note on Rollback: If you are rolling back and have authentication enabled using Active Directory, any new users/groups added to site access on the Access Control page after the upgrade will not be retained upon rolling back. Any users added before the upgrade will continue to remain. [#9850]

Important - Please read if you currently have authentication enabled using Active Directory with TLS enabled prior to upgrading to v1.6.10.

Starting with v1.6.8, Rancher has updated the Active Directory auth plugin and moved it into the new authentication framework. We have also further secured the AD+TLS option by ensuring that the hostname/IP of the AD server matches with the hostname/IP of the TLS certificate. Please see [#9459] for details.

Due to this new check, you should be aware that if the hostname/IP does not match your TLS certificate, you will be locked out of your Rancher server if you do not correct this prior to upgrading. To ensure you have no issues with the upgrade, please execute the following to verify your configuration is correct.

  • Verify the hostname/IP you used for your AD configuration. To do this, log into Rancher using a web browser as an admin and click Admin → Access Control. Note the server field to determine your configured hostname/IP for your AD server.
  • To verify the configured hostname/IP for your TLS cert, you can execute the following command to determine the CN attribute:
    openssl s_client -showcerts -connect domain.example.com:443
    You should see something like:
    subject=/OU=Domain Control Validated/CN=domain.example.com
    Verify that the CN attribute matches with your configured server field from the above step.
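
If you want to script this check, the CN can be extracted directly; a minimal sketch, where `domain.example.com` is a placeholder for your configured AD server:

```shell
AD_HOST=domain.example.com   # placeholder; use the server value from your AD config

# Fetch the server certificate and print only the subject's CN attribute.
echo | openssl s_client -connect "${AD_HOST}:443" 2>/dev/null \
  | openssl x509 -noout -subject 2>/dev/null \
  | sed -n 's|.*CN=\([^/,]*\).*|\1|p'
```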

If the fields match, you are good to go. Nothing else is required.

If the fields do not match, please execute the following steps to correct it.

  • Open a web browser and go to Rancher’s settings URL. This can be done by logging into Rancher as an admin and clicking API → Keys. You should see an Endpoint (v2-beta) field. Take the value of that field and append /settings. The final URL should look something like my.rancher.url:8080/v2-beta/settings. Launch this URL in your browser and you should see Rancher’s API browser.
  • Search for api.auth.ldap.server and click that setting to edit it. On the top right, you should be able to click an edit button. Change the value to match the hostname/IP found in your cert as identified by the CN attribute, and click Show Request → Send Request to persist the value into Rancher’s DB. The response should show your new value.
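
The same edit can also be made from the command line against the v2-beta API; a minimal sketch, where the URL and API keys are placeholders and the PUT body shape is an assumption based on what the API browser sends:

```shell
# Placeholders; substitute your Rancher URL and an admin API key pair.
RANCHER_URL=http://my.rancher.url:8080
ACCESS_KEY=myAccessKey
SECRET_KEY=mySecretKey
SETTING_URL="${RANCHER_URL}/v2-beta/settings/api.auth.ldap.server"

# Read the current value, then update it to the CN from your certificate.
if command -v curl >/dev/null 2>&1; then
  curl -s -u "${ACCESS_KEY}:${SECRET_KEY}" "${SETTING_URL}" || true
  curl -s -u "${ACCESS_KEY}:${SECRET_KEY}" -X PUT \
    -H 'Content-Type: application/json' \
    -d '{"value": "domain.example.com"}' \
    "${SETTING_URL}" || true
fi
```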

Once this is completed and the hostname/IP matches your certs’ CN attribute, you should have no issues with AD login after upgrading to 1.6.8.

Enhancements

  • Rancher NFS/EBS/Secrets volume drivers have been re-implemented using a design similar to how k8s flexvol drivers are implemented. [#8826]

Prior to 1.6.15, Rancher's two storage drivers (NFS and EBS) and secrets were implemented as Docker volume plugins running as containers. Since the drivers themselves run as containers, the biggest problem arises when the Docker daemon gets restarted: the startup order of containers is not guaranteed. This means that if the Docker volume plugins are not loaded before an app container that uses a volume requiring the driver, you get into a chicken-and-egg scenario. The daemon just gets stuck trying to initialize a volume with no available driver.

To address this, we re-implemented the same storage drivers using similar design patterns to how k8s flexvol drivers are implemented. These flexvol drivers are then responsible for interacting with the storage system and ultimately create a simple host bind mount for the container that’s using the volume. With host bind mounts, docker will no longer require docker volume plugins to initialize those volumes. In short, since they are no longer docker volume drivers, the daemon will happily restart itself without waiting for a containerized driver that isn’t available.

Another reason for moving away from Docker volume plugins is to avoid issues like this.

The takeaways are the following:

  1. This only affects cattle as our k8s environments do not use this.
  2. You do not have to upgrade the drivers, but until you do, all volumes will still be created using the pre-1.6.15 drivers. Upon the driver upgrade, all new volumes will be created using the new flexvol.
  3. Existing services will continue to use the old driver until you follow the instructions below to upgrade them.
  4. This only affects our NFS, EBS, Secrets, and Vault integration because they all interact with volumes.
  5. Because the flexvol drivers now create host bind mounts, docker inspect <container> will show a host bind mount instead of a volume, and docker volume ls will no longer show the volumes.
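
The difference in takeaway 5 can be observed directly on a host; a minimal sketch, where `myapp` is a placeholder container name:

```shell
CONTAINER=myapp   # placeholder; substitute a container using a Rancher volume

# Flexvol-backed volumes appear as host bind mounts ("bind"), while
# pre-1.6.15 driver volumes appear with type "volume".
if docker info >/dev/null 2>&1; then
  docker inspect --format \
    '{{range .Mounts}}{{.Type}}: {{.Source}} -> {{.Destination}}{{"\n"}}{{end}}' \
    "$CONTAINER"
  docker volume ls   # flexvol mounts no longer show up here
fi
```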

To upgrade using the new drivers, please do the following:

  1. Upgrade your desired storage driver, e.g. Rancher NFS, Rancher EBS, Rancher Secrets, Rancher Vault.

  2. Upgrade all services in the environment that are using the storage driver. Without upgrading the services, they will continue to use the docker volume.

  3. On every host, clean up the unused and dangling Docker volumes using the command below. After running the volume removal command, you will need to restart Docker in order to clean up these unused and dangling Docker volumes.

     $ docker volume rm `docker volume ls -q -f dangling=true`
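
On Docker 1.13 and later, the built-in prune command performs the same cleanup; a minimal sketch (the guard makes it a no-op when no daemon is reachable):

```shell
# Remove all volumes not referenced by any container (Docker 1.13+ only).
if docker info >/dev/null 2>&1; then
  docker volume prune -f
fi
```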
    
  • Secrets and Vault Secrets Bridge are now GA [#10500] - In order to use the GA version of these stacks, you must upgrade the infrastructure stacks and follow the directions above on upgrading from Docker volumes to the new flex volumes.

  • Improvements to Existing Experimental Windows Environments [#10661]

    • Allow ability to upgrade Rancher Agent
    • Enable ability to create webhooks
    • CLI support
  • Kubernetes 1.9 Support [#10516]

  • Docker Machine v0.14.0 [#12008] - Updated to use Docker Machine v0.14.0, which includes support for additional Azure regions [#10943], additional AWS zones [#11376], and AWS instance types (C5 and M5).

  • HAProxy 1.8.4 - Upgraded the Rancher load balancer to use HAProxy 1.8.4, which addresses some bugs in HAProxy. [#10870, #12064]

  • Rancher CLI

    • Added ability to upgrade to a catalog template through rancher catalog command [#12048]
    • Fixed a CLI issue where Docker Compose files ending in .yml.tpl would launch empty stacks [#9295]

Infrastructure Service Updates

When upgrading infrastructure services, please make sure to upgrade in the recommended order.

  • Kubernetes 1.9.4 - v1.9.4-rancher1-1

    • New images: rancher/k8s:v1.9.4-rancher1-1, rancher/kubectld:v0.8.6, rancher/kubernetes-agent:v0.6.7
    • Switched the default add-on images (dashboard, dns, etc.) to be pulled from Rancher’s repo in Dockerhub instead of GCR. The location is still configurable, but the default pulls from Rancher instead of GCR. [#10323]
    • Fixed an issue with conntrack entries being deleted incorrectly for Service IP range [#10288]
    • Fixed an issue where add-ons might not be updated if using the same k8s image [#10971]
    • Secured kubelet port by no longer allowing anonymous requests [#12079]
    • Cleaned up some logging messages [#10366, #10365]
    • Fixed an issue where audit-logs were missing from the logs of the kube-api-server [#10631]
    • Fixed an issue where k8s certs were assuming k8s will be discovered using Rancher DNS [#10086]

    Note: If upgrading from a k8s version prior to k8s v1.6, then you will need to re-generate any remote kubeconfig due to RBAC support.

  • Network Services - v0.2.9

    • New images: rancher/metadata:v0.10.2, rancher/dns:v0.17.3, rancher/network-manager:v0.7.20
    • Fixed an issue where healthcheck was not able to reach the server during startup. [#10288]
    • Fixed an issue with deletion of conntrack entries related to the Kubernetes cluster IP subnet [#10288]
  • IPSec - 0.2.3

    • New images: rancher/net:v0.13.11
    • Reduced the logging level for IPSec [#7571]
    • Fixed connectivity check issues when adding multiple hosts [#11372]
  • Healthcheck - v0.3.6

    • New images: rancher/healthcheck:v0.3.6
    • During a service upgrade with healthcheck changes, the healthcheck defined on the container is now used. [#11487]
  • Rancher NFS - 0.5.0

    • New images: rancher/storage-nfs:v0.9.1
    • Switched from Docker volumes to flex volumes [#8826]
  • Rancher EBS - 0.5.0

    • New images: rancher/storage-ebs:v0.9.1
    • Switched from Docker volumes to flex volumes [#8826]
  • Rancher Secrets Bridge - v0.3.2

    • New images: rancher/secrets-bridge-v2:v0.3.2, rancher/storage-secrets-bridge-v2:v0.9.6
    • Switched from Docker volumes to flex volumes [#8826]
  • Rancher Secrets - 0.0.3

    • New images: rancher/storage-secrets:v0.9.1
    • Switched from Docker volumes to flex volumes [#8826]
  • Windows Network Services - v0.2.0

    • Added the parameter --never-recurse-to=169.254.169.251 to the DNS service
  • Network Policy Manager - 0.0.3

    • New images: rancher/network-policy-manager:v0.2.7
    • Small bug fixes
  • ECR Credentials Updater for Windows - v3.0.0

    • New images: rancher/rancher-ecr-credentials-windows:v3.0.0
    • Introduced an ECR credentials updater for Windows environments [#12018]
  • Container Crontab - v0.5.0

    • New images: rancher/container-crontab:v0.5.0
    • Use service UUID for metadata lookups to allow renaming of stacks [#11398]
    • Handle case sensitivities more gracefully when comparing stack service names [#12001]
    • Return sidekicks names without primary name [#10908]

Known Major Issues

Major Bug Fixes since v1.6.14

  • Fixed an issue where storage pools from storage drivers with a local scope were not cleaned up when hosts were removed [#8346]
  • Fixed an issue where the agent might panic if the websocket connection was dropped [#8848]
  • Fixed an issue where rancher secrets would create too many Docker volumes and not get cleaned up by switching to flex volumes for storage drivers [#8984]
  • Fixed an issue where there was no way to skip the DNS check that was added into the agent [#9562]
  • Fixed an issue where logs of the kubernetes-agent wouldn’t load correctly in the UI [#10009]
  • Fixed an issue where the cipher list in HAProxy was more relaxed than intended [#10207]
  • Fixed an issue where OpenLDAP was unable to be configured with Oracle DSEE server [#10292]
  • Fixed an issue where the read only role had access to create/update/delete webhooks [#10453]
  • Fixed an issue where labels for hosts were not being propagated to k8s nodes when using EC2 hosts [#10506]
  • Fixed an issue where catalog templates were not error-ing out when 2 primary services were referencing the same sidekick [#10563]
  • Fixed an issue where webhooks weren’t working with Aliyun Dockerhub and Azure Container Registry [#10654, #10639]
  • Fixed an issue where rancher/agent wasn’t working with Docker 17.12 on boot2docker [#10970]
  • Fixed an issue where the Rancher agent wouldn’t work with nvidia runtime [#11467]

Rancher CLI Downloads

Rancher-Compose Downloads