Is Rancher 2.0 good for us?

I would like to adopt Rancher, but I am somewhat worried about how it will work in practice. Are there any case studies I can review, and how can we get some sort of training?

We specialize in eCommerce development and digital marketing.

Hi,

The decision to adopt Rancher probably has many facets. There is a lot of material to be found on the Rancher web site. Basically, Rancher 2.x is a vendor-agnostic management tool for Kubernetes (aka k8s) clusters. k8s itself is a container orchestration tool, which means it offers the means to manage container-based workloads (web applications etc.) on one to many nodes. If you can confirm that your workloads can run in containers (this is the most important prerequisite), k8s is a very good choice regarding the stability and availability of the systems that run on it. However, its learning curve is somewhat steep. This is what Rancher 2.x makes a lot easier: it lets you start working through an intuitive GUI, but since it also gives you access to the internals of k8s, you will most likely never get stuck with it.
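To give you an idea of what "container-based workload" means in k8s terms, here is a minimal, purely illustrative Deployment manifest for a small stateless web server (the name and image are placeholders); k8s takes care of scheduling the two replicas onto the cluster nodes and keeping them running:

```yaml
# minimal illustrative Deployment - runs 2 replicas of a small web server
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-web            # arbitrary example name
spec:
  replicas: 2
  selector:
    matchLabels:
      app: hello-web
  template:
    metadata:
      labels:
        app: hello-web
    spec:
      containers:
      - name: web
        image: nginx:1.25    # any containerized web app would do
        ports:
        - containerPort: 80
```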

There is a lot of training material around - some of it on the Rancher web site, more to be found by searching the internet. One particular recommendation is the material for the Rancher 2.0 workshop I hosted at the BaselOne 2018 conference this fall, which you can find in this github repo: https://github.com/Remigius2011/rancher-workshop . It covers the basics of installing Rancher, creating a cluster and running some simple stateless (i.e. without persistent data) and stateful (in this case a PostgreSQL database) workloads. I try to keep it updated, but of course I can guarantee neither its correctness nor that it fits your particular use case. If you work through it, however, you will most likely get a taste of what you can achieve with it.
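Just to give you a feeling for the entry barrier: a single-node (non-HA) Rancher server for evaluation purposes is started with a single docker command, roughly like this (the exact flags depend on your Rancher version, so please check the install docs):

```bash
# illustrative single-node Rancher install for evaluation - not an HA setup
# newer 2.x versions additionally require --privileged - see the docs
docker run -d --restart=unless-stopped \
  -p 80:80 -p 443:443 \
  rancher/rancher:latest
```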

I hope you will find Rancher 2.x useful and fun to work with - which is the case for me.

Cheers, Remi.

To start, you can simply use Rancher's RKE tool to quickly create an HA k8s cluster on VMs or bare-metal servers. After that you can decide whether you want to install Rancher onto your new cluster or not. These are two separate steps.
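For illustration, an RKE `cluster.yml` for a small three-node cluster can look roughly like this (addresses and the ssh user are placeholders for your own VMs or bare-metal hosts):

```yaml
# cluster.yml - illustrative RKE configuration for a small HA cluster
nodes:
  - address: 10.0.0.11          # placeholder IPs - replace with your hosts
    user: ubuntu                # ssh user that can run docker
    role: [controlplane, etcd, worker]
  - address: 10.0.0.12
    user: ubuntu
    role: [controlplane, etcd, worker]
  - address: 10.0.0.13
    user: ubuntu
    role: [controlplane, etcd, worker]
```

Running `rke up` in the same directory then provisions the cluster and writes a `kube_config_cluster.yml` you can use with kubectl.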

Thank you for your detailed response. If you don't mind me asking, is it possible to run Rancher on a local network and connect it to a public network?

Also, is it necessary to have Rancher online all the time, or can we turn off the local VM?

Rancher runs online introductory training sessions every month, which are also recorded and can be found on YouTube. There are also regular sessions on other popular topics, often with guest speakers. That's a good place to start. The docs, and especially the installation and quick start guides, are very comprehensive, so they are another good source.

If you are familiar with infrastructure provisioning using tools such as Terraform, it's pretty straightforward to create your set of nodes for the management cluster and add other clusters for your workloads. I agree with @yeti that using RKE to set up your HA management cluster is a good way to go. When you add your workload clusters, you do that in a similar way to version 1, using the Rancher agent, which takes care of all the heavy lifting. The quick start guides cover that in enough detail to make it easy to customise to your particular setup.

Obviously it's good to do all of that as a repeatable CI pipeline, where you will need access to the key toolchain: Terraform, RKE, helm, kubectl, the CLI for your platform, possibly a secrets solution (although in some cases your CI platform will provide that - it usually depends on whether you are in a corporate setting or not) and, if you want, the Rancher CLI too. With a lot of focus these days on hosted platforms (EKS, AKS, et al) and serverless, you may be lucky enough to be in a position to skip all the infrastructure work, and if you can, more power to you :slight_smile:
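As a rough sketch of the "install Rancher onto the management cluster" step with helm (the hostname is a placeholder, and cert-manager or your own certificates need to be in place first; the HA install docs have the authoritative, version-specific steps):

```bash
# illustrative Rancher install on an existing RKE-built cluster via helm
helm repo add rancher-latest https://releases.rancher.com/server-charts/latest
kubectl create namespace cattle-system
helm install rancher rancher-latest/rancher \
  --namespace cattle-system \
  --set hostname=rancher.example.com      # placeholder hostname
```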

I have a list of questions, as I am not too sure about some of this.

Now, what happens to the data we added to Rancher if we upgrade it using Docker? Shouldn't there be persistent data, and if so, where is it?

When we point a domain at the cluster to set up ingresses, where should the domain be pointed: to the Rancher server, to the etcd/control plane nodes, or to the workers?

As for persistent data for specific workloads, how can that be created, and how does it work in general?

Hi,

just a few quick answers.

In general, you should be prepared to do more research of your own; this will most likely get you answers faster than asking questions (and that applies in general, not only regarding Rancher).

Rancher-generated/controlled clusters continue to run even while Rancher is stopped. When running Rancher on a VM (which has advantages, e.g. the ability to create snapshots before upgrades), it is advisable to shut down the VM instead of just pausing it, because a restart results in a proper reconnection between Rancher and the controlled clusters.

There is persistent data for Rancher; the upgrade procedure is well documented in the Rancher docs and depends on the type of installation (which is also documented in the same place).
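For the single-node docker install, the data lives in /var/lib/rancher inside the container, and the documented upgrade boils down to stopping the old container, creating a data container from its volumes, and starting the new version on top of them. Roughly (the container name and versions are placeholders; follow the exact steps in the docs for your version):

```bash
# illustrative single-node (docker) upgrade - the rancher docs are authoritative
docker stop rancher-server                                   # your existing container
docker create --volumes-from rancher-server --name rancher-data \
  rancher/rancher:<old-version>                              # keeps the data volume
docker run -d --volumes-from rancher-data \
  --restart=unless-stopped -p 80:80 -p 443:443 \
  rancher/rancher:<new-version>
```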

The clusters mainly operate on their own. Ingresses belong to the cluster; Rancher only performs control operations on the clusters, it does not route traffic to them or on their behalf.
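In other words, DNS for your application domains should point at the cluster itself - typically at the worker nodes, where the ingress controller (nginx in an RKE-built cluster) runs - and not at the Rancher server. A minimal, illustrative Ingress then just declares which host is routed to which service (names are placeholders, and the apiVersion depends on your k8s version):

```yaml
# illustrative Ingress - host and service name are placeholders
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: shop-ingress
spec:
  rules:
  - host: shop.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: shop-frontend       # the Service of your workload
            port:
              number: 80
```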

Handling persistent data is necessary for so-called "stateful workloads" (which is just a name for workloads with persistent data - useful, e.g., as a search term in search engines). The nature of stateful workloads is very individual; there is no recipe that works identically for all of them. Typically, you provide some sort of persistent storage in Kubernetes (via Rancher) and assign it to the workload definition, which mounts it into the file system of the resulting pod (loosely speaking the docker container - similar to a docker volume).
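Loosely speaking, in plain k8s terms it looks like the sketch below: a PersistentVolumeClaim requests storage (the storage class name is a placeholder and depends on what your cluster offers), and the workload mounts that claim into the container's file system. Rancher's GUI essentially generates the same objects for you.

```yaml
# illustrative PersistentVolumeClaim - storageClassName depends on your cluster
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pg-data
spec:
  accessModes: ["ReadWriteOnce"]
  storageClassName: my-storage-class    # placeholder, e.g. NFS, Longhorn, cloud disk
  resources:
    requests:
      storage: 10Gi
---
# illustrative single-replica postgresql workload that mounts the claim
apiVersion: apps/v1
kind: Deployment
metadata:
  name: postgres
spec:
  replicas: 1
  selector:
    matchLabels:
      app: postgres
  template:
    metadata:
      labels:
        app: postgres
    spec:
      containers:
      - name: postgres
        image: postgres:15
        env:
        - name: POSTGRES_PASSWORD
          value: changeme               # for a real setup, use a Secret instead
        volumeMounts:
        - name: data
          mountPath: /var/lib/postgresql/data
      volumes:
      - name: data
        persistentVolumeClaim:
          claimName: pg-data
```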