We have Rancher 2.6.9 running on an AWS EKS cluster at Kubernetes v1.20. Due to an issue with the cluster, the EKS cluster itself cannot be upgraded. AWS's answer was to create a new cluster and move the applications to it. The problem is that the oldest Kubernetes version we can now create with EKS is 1.23.
This Rancher installation manages one additional EKS cluster that runs our applications.
Taking a backup of the old Rancher and restoring it into the same 2.6.9 version of Rancher on the new cluster fails with CRD errors caused by the apiVersion changes in Kubernetes v1.22. Since we can neither build a new cluster with the same Kubernetes version nor upgrade the old one, we are stuck on both sides. The Rancher documentation describes the issue as follows:
2. Restore from backup using a Restore custom resource
IMPORTANT:
Kubernetes v1.22, available as an experimental feature of v2.6.3, does not support restoring from backup files containing CRDs with the apiVersion apiextensions.k8s.io/v1beta1. In v1.22, the default resourceSet in the rancher-backup app is updated to collect only CRDs that use apiextensions.k8s.io/v1. There are currently two ways to work around this issue:
- Update the default resourceSet to collect the CRDs with the apiVersion v1.
- Update the default resourceSet and the client to use the new APIs internally, with apiextensions.k8s.io/v1 as the replacement.

NOTE
When making or restoring backups for v1.22, the Rancher version and the local cluster's Kubernetes version should be the same. The Kubernetes version should be considered when restoring a backup, since the supported apiVersion in the cluster and in the backup file could be different.
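For context, the restore we are attempting uses a Restore custom resource along these lines (the backup filename, secret names, and S3 details below are placeholders, not our real values):

```yaml
# Restore CR for the rancher-backup operator on the new cluster.
# prune is disabled because this is a migration to a fresh cluster.
apiVersion: resources.cattle.io/v1
kind: Restore
metadata:
  name: restore-migration
spec:
  backupFilename: rancher-backup-example.tar.gz   # placeholder filename
  prune: false
  storageLocation:
    s3:
      credentialSecretName: s3-creds              # placeholder secret
      credentialSecretNamespace: default
      bucketName: rancher-backups                 # placeholder bucket
      folder: rancher
      region: us-east-1
      endpoint: s3.us-east-1.amazonaws.com
```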
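As far as we understand it, the first workaround amounts to editing the CRD selector in the default rancher-resource-set ResourceSet so that it collects apiextensions.k8s.io/v1 CRDs instead of v1beta1 ones, roughly like this (the regexes are illustrative only, we have not verified the chart's exact defaults):

```yaml
apiVersion: resources.cattle.io/v1
kind: ResourceSet
metadata:
  name: rancher-resource-set
resourceSelectors:
  # ...the other selectors from the default resourceSet stay as-is...
  - apiVersion: "apiextensions.k8s.io/v1"   # was apiextensions.k8s.io/v1beta1
    kindsRegex: "."
    resourceNameRegex: "cattle.io$"          # illustrative; keep the chart's default regex
```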
What is the best solution here? Is there a clean way to detach the one managed EKS application cluster from the old Rancher and import it into the new Rancher installation without impacting the applications running on it?