How to change the IP, FQDN and certificates of an existing Rancher Server

Introduction

We have a Rancher Server and a Kubernetes Cluster that was deployed through the Rancher Server using the “Custom Cluster” option.

We recently had to migrate our Rancher Server to a new network, which meant the Rancher Server would get a new IP, a new FQDN and new certificates.
The Kubernetes Cluster, however, remains in the same network and will be managed by the same Rancher Server as before, just under its new IP, FQDN and certificates.

We managed to engineer a migration path, and since we were able to migrate our Rancher Server successfully, we want to share our documentation with the Rancher Community.
Note that this is not an officially supported procedure; use it at your own risk. It worked fine for us.

How to

Comment

The Rancher Forum allows only two links in a blog post.
Therefore we will have to write rancher-example-com instead of rancher.example.com and rancher-cloud-example-com instead of rancher.cloud.example.com.

Setup

The Rancher Server

  • Install method: Single Node Install
  • OS: Ubuntu 16.04
  • Rancher version: rancher/rancher:v2.1.6

The Kubernetes Cluster

  • Install method: Custom Cluster
  • Size: 3 etcd, 2 controlplane, 3 worker
  • Kubernetes version: 1.11.x
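
For reference, a Single Node Install like this is typically started with a command along the following lines (a sketch of the standard Single Node Install; the host data path /opt/rancher is an assumption, not copied from our setup):

    docker run -d --restart=unless-stopped \
      -p 80:80 -p 443:443 \
      -v /opt/rancher:/var/lib/rancher \
      rancher/rancher:v2.1.6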

Before

Kubernetes Cluster A ---> rancher.example.com (10.0.0.10, Certs signed by CA 1)

After

Kubernetes Cluster A ---> rancher.cloud.example.com (192.168.0.10, Certs signed by CA 2)

Prerequisites

The Kubernetes Cluster Nodes must be able to access the new VM rancher-cloud-example-com via HTTPS (TCP 443).
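
A quick way to verify this from each node is a plain connectivity test (a sketch; it assumes curl is installed on the node and that the new FQDN already resolves):

    # On every Kubernetes Cluster Node: can we reach the new Rancher VM via HTTPS?
    # -k skips certificate verification; we only test connectivity here.
    curl -kv https://rancher-cloud-example-com/ping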

Steps to change IP, FQDN and certificates of an existing Rancher Server

  1. Deploy a new Ubuntu VM rancher-cloud-example-com
  2. Stop the Rancher Container on rancher-example-com
  3. Copy the data in /var/lib/rancher from the Rancher Container on rancher-example-com to the new VM rancher-cloud-example-com (see the data copy sketch after this list)
  4. Start the Rancher Container on rancher-example-com again
  5. Delete the cattle-cluster-agent Deployment in the System Project of the Kubernetes Cluster
  6. Delete the cattle-node-agent DaemonSet in the System Project of the Kubernetes Cluster (see the kubectl sketch after this list)
  7. Stop the Rancher Container on rancher-example-com for the last time
  8. Start the Rancher Container on rancher-cloud-example-com with the /var/lib/rancher data from rancher-example-com and with the new certificates signed by CA 2 (see the restore sketch after this list)
  9. Change the server-url in the Rancher UI Settings Menu to the new URL
  10. Execute the “Node Run Command” with the new server-url and CA checksum again on each Kubernetes Cluster Node (see the Node Run Command sketch after this list)
  11. Check the logs of the Rancher Agent
  12. Upgrade the Rancher Server to rancher/rancher:v2.1.7 to redeploy the cattle-cluster-agent Deployment and cattle-node-agent DaemonSet on the Kubernetes Cluster
  13. Upgrade the Kubernetes Cluster to a newer version (1.11.x to 1.12.x)
  14. The migration was successful if both the Rancher upgrade and the Kubernetes Cluster upgrade completed without errors.
  15. Done
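
For steps 2-4, a minimal sketch of stopping the old container and copying its data (the container name rancher-server, the user name and the archive path are assumptions; adjust them to your environment):

    # On rancher-example-com: stop the Rancher Container
    docker stop rancher-server

    # Archive /var/lib/rancher from the stopped container's volumes
    docker run --rm --volumes-from rancher-server -v $PWD:/backup \
      alpine tar zcf /backup/rancher-data.tar.gz /var/lib/rancher

    # Copy the archive to the new VM
    scp rancher-data.tar.gz user@rancher-cloud-example-com:/tmp/

    # Step 4: start the old Rancher Container again
    docker start rancher-server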
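
For steps 5 and 6, the agents can be removed with kubectl against the downstream cluster (a sketch; it assumes a kubeconfig with direct access to the cluster, e.g. one generated by RKE, because a kubeconfig proxied through Rancher stops working while Rancher is down):

    # Both agents live in the cattle-system namespace, which belongs to the System Project
    kubectl -n cattle-system delete deployment cattle-cluster-agent
    kubectl -n cattle-system delete daemonset cattle-node-agent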
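
For step 8, a sketch of restoring the data on the new VM and starting Rancher with the certificates from CA 2 (the host certificate paths under /opt/certs are assumptions; the container paths /etc/rancher/ssl/cert.pem, key.pem and cacerts.pem are the standard mount points for certificates signed by a private CA):

    # On rancher-cloud-example-com: unpack the archive; it restores to /var/lib/rancher
    sudo tar zxf /tmp/rancher-data.tar.gz -C /

    # Start Rancher with the restored data and the new certificates (CA 2)
    docker run -d --restart=unless-stopped \
      -p 80:80 -p 443:443 \
      -v /var/lib/rancher:/var/lib/rancher \
      -v /opt/certs/cert.pem:/etc/rancher/ssl/cert.pem \
      -v /opt/certs/key.pem:/etc/rancher/ssl/key.pem \
      -v /opt/certs/cacerts.pem:/etc/rancher/ssl/cacerts.pem \
      rancher/rancher:v2.1.6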
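
For steps 10 and 11, the “Node Run Command” can be copied from the new Rancher UI when editing the Custom Cluster; it looks roughly like the sketch below. The token and the checksum are placeholders, the real values come from the UI, and the checksum is simply the SHA-256 of the new CA certificate. Pass the role flags (--etcd, --controlplane, --worker) that match what each node already runs:

    # On each Kubernetes Cluster Node: re-register against the new server-url
    docker run -d --privileged --restart=unless-stopped --net=host \
      -v /etc/kubernetes:/etc/kubernetes -v /var/run:/var/run \
      rancher/rancher-agent:v2.1.6 \
      --server https://rancher-cloud-example-com \
      --token <TOKEN-FROM-UI> \
      --ca-checksum <SHA256-OF-CACERTS-PEM> \
      --worker

    # The CA checksum can also be computed by hand:
    sha256sum cacerts.pem

    # Step 11: follow the Rancher Agent logs on the node
    docker logs -f $(docker ps -q --filter ancestor=rancher/rancher-agent:v2.1.6)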

Testing

The most important thing is to verify that you can still make changes to your downstream clusters without errors.

For example:

  • Upgrade Rancher
  • Upgrade the Kubernetes Cluster to a newer version (1.11.x to 1.12.x)
  • Create Projects, add Users, manage deployments
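
A few quick sanity checks via kubectl through the new Rancher URL (a sketch; names and counts will differ in your cluster):

    # All nodes should be Ready, and both agents should be running again
    kubectl get nodes
    kubectl -n cattle-system get pods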

Feedback from Rancher Support

Rancher Labs is also working on an official feature for this use case:

Authors

@dmlabs
@linuxbuddy
