Our cluster is on-premises. rancher2 is running alone in a VM. The kubernetes nodes are physical.
If I create a kubernetes cluster using rke and import it into rancher2, then I can re-run rke to add/remove nodes or update the cluster.
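For context, the rke flow I mean is roughly this (the node addresses, SSH user, and file contents below are placeholders, not our actual config):

```bash
# Minimal sketch of the rke workflow; all values are placeholders.
cat > cluster.yml <<'EOF'
nodes:
  - address: 10.0.0.1          # placeholder control-plane/etcd node
    user: rancher
    role: [controlplane, etcd, worker]
  - address: 10.0.0.2          # placeholder worker node
    user: rancher
    role: [worker]
EOF

rke up        # provisions the cluster from cluster.yml
# ...later, edit cluster.yml (add/remove nodes, bump the version), then:
rke up        # re-run to converge the cluster to the new cluster.yml
rke remove    # tears the whole cluster down again
```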
If I create a kubernetes cluster using the rancher2 UI to generate a docker command to run on each kubernetes node, then how do I remove/update the kubernetes cluster?
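For reference, the command the UI generates looks roughly like this (version, server URL, token, and checksum below are placeholders):

```bash
# Rough shape of the UI-generated node registration command;
# every value here is a placeholder, not from a real cluster.
sudo docker run -d --privileged --restart=unless-stopped --net=host \
  -v /etc/kubernetes:/etc/kubernetes -v /var/run:/var/run \
  rancher/rancher-agent:v2.x.x \
  --server https://rancher.example.com \
  --token <node-registration-token> \
  --ca-checksum <checksum> \
  --etcd --controlplane --worker
```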
Any warnings about using either of the above methods to create the kubernetes cluster?
Either way is valid, and both end up with pretty much the same end product (a containerized Kubernetes cluster). I prefer to use the UI -> Custom Cluster -> docker command to create workload clusters on-prem. Mainly for simplicity’s sake, and I haven’t had any issues. You can easily update the kubernetes version right in the GUI (Edit Cluster button). If you later want to blow away the cluster, there is a page in the docs that gives the steps. It looks like a lot of work at first glance, but you can copy/paste most of it. I’ve done it many, many times in the lab, and it works and returns “clean” nodes.
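To give a rough idea of what that cleanup page has you do on each node (this is a condensed, hand-written sketch, so verify the exact steps and paths against the docs for your Rancher version before running anything; it wipes the node):

```bash
# Condensed sketch of Rancher node cleanup -- destructive, double-check
# against the official docs for your version before use.
docker rm -f $(docker ps -qa)              # remove all containers
docker volume rm $(docker volume ls -q)    # remove all docker volumes

# Unmount leftover kubelet tmpfs mounts
for mount in $(mount | grep tmpfs | grep '/var/lib/kubelet' | awk '{ print $3 }'); do
  umount "$mount"
done

# Delete the Kubernetes/Rancher state directories
rm -rf /etc/cni /etc/kubernetes /opt/cni /opt/rke \
  /run/calico /run/flannel /var/lib/calico /var/lib/cni \
  /var/lib/etcd /var/lib/kubelet /var/lib/rancher \
  /var/log/containers /var/log/pods

# Remove leftover CNI network interfaces, if present
ip link delete flannel.1 2>/dev/null
```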
In a lab or a test cluster, using the UI is OK. But for Prod or any environment where you want a predictable and repeatable process, automation is the only way to go IMHO. Pick your CI/CD tool of choice, create a release pipeline, make sure it’s idempotent, and from that pipeline call the tooling you need to get the job done. It doesn’t matter which tools you use so long as you and your team know how to use them, so vanilla kubectl and a bit of bash and curl is fine, as is rke, Terraform, Puppet, Ansible or anything else that gets you there. Be consistent, create some common repeatable patterns, create a run-book so you don’t end up with a SPOF (you do want to be able to do other things in your career, right?) and you’ll be all set to focus on the things that really matter (hint, it isn’t building and maintaining K8s clusters).
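As a trivial example of the “idempotent pipeline step” idea, here is what one rke-based step could look like (the manifests/ directory is hypothetical; the kubeconfig filename is what rke writes next to cluster.yml by default):

```bash
#!/usr/bin/env bash
# Hypothetical pipeline step: safe to re-run, converges to cluster.yml.
set -euo pipefail

# rke up is itself idempotent: it diffs the desired state in cluster.yml
# against the running cluster and only applies what changed.
rke up --config cluster.yml

# Post-provision config applied declaratively, so re-runs are no-ops
# when nothing changed (manifests/ is a placeholder directory).
export KUBECONFIG=kube_config_cluster.yml
kubectl apply -f manifests/
```

The point isn’t this exact script; it’s that every run starts from versioned config and converges to the same state, so anyone on the team can run the pipeline instead of one person hand-driving the cluster.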