New Features (since v1.1.0-dev3)
- Machine Catalog [#4919, #4856] - Rancher now lets admins control which Docker Machine drivers are available for machine deployments. Drivers can be added individually to Rancher or submitted to the public catalog repo, where they can be made available for all Rancher deployments to install and use.
- Kubernetes: Persistent Storage [#4444] - K8s on Rancher now supports persistent storage for EBS and GCE.
- Kubernetes: Private Registry [#4529] - K8s on Rancher now supports private registries.
- Kubernetes: Upgrades [#4896] - K8s on Rancher now supports upgrading k8s in each environment. Note that only environments created after this release are capable of supporting future upgrades.
- Kubernetes: More ingress controller improvements [#4892] - K8s ingress controller on Rancher now has support for custom ports, TLS, and ability to scale.
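As an illustrative sketch (not taken from this release), an ingress using TLS and a custom port under the Rancher ingress controller might look like the following. The host, secret, and service names are hypothetical, and the annotation key for the custom port is an assumption; check the Rancher docs for your version:

```yaml
# Hypothetical sketch -- names and the annotation key are illustrative only
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: example-ingress
  annotations:
    # custom public port served by the ingress controller
    # (exact annotation key is an assumption)
    http.port: "8080"
spec:
  tls:
  # assumes a TLS secret named "example-tls" already exists
  - secretName: example-tls
  rules:
  - host: example.com
    http:
      paths:
      - path: /
        backend:
          serviceName: example-service
          servicePort: 80
```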
Known Major Issues
- K8S environments will not work. [#5204]
- Workaround: Upgrade to v1.1.0-dev5, then switch your k8s environment to Cattle and back to k8s so that the containers in the system stack are re-created.
Major Bug Fixes since v1.1.0-dev3
- Fixed an issue where a service upgrade could get stuck upgrading indefinitely [#4240]
- Fixed an issue in rancher-compose where the logs option did not exit even when no more logs were available [#4001]
- Fixed an issue where DNS resolution for headless services on k8s would fail to resolve [#4388]
- Fixed an issue where file creation in the kubectl shell did not work [#4394]
- Fixed an issue where the Rancher LB would return a 503 upon a container restart [#4487]
- Fixed an issue where a space in a catalog name would break the catalog item display in the GUI [#4528]
- Fixed an issue with the Route53 catalog item where AWS TTL was set to null and caused the service to no longer function [#4671]
- Added a new "destroy first" health check strategy for services, preventing a case where a stopped container for the service on the only host eligible for rescheduling would block the replacement container from being deployed [#4894]
- Fixed an issue where a service with a "start-once" sidekick would show as "start-once" rather than "active" [#4748]
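For context on the health check strategy mentioned above, a service health check in rancher-compose is configured roughly as below. This is a hedged sketch: the field values and the exact strategy name for "destroy first" are assumptions, so consult the compose documentation for your Rancher version:

```yaml
# rancher-compose.yml -- hypothetical sketch; values are assumptions
web:
  scale: 2
  health_check:
    port: 80
    request_line: GET / HTTP/1.0
    interval: 2000
    response_timeout: 2000
    unhealthy_threshold: 3
    healthy_threshold: 2
    # "destroy first" destroys the unhealthy container before scheduling a
    # replacement; the exact key/value below is an assumption
    strategy: recreate
```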