How do I get started with Rancher 2

Hi!

I have a few questions regarding Rancher 2. I successfully set up the HA cluster and launched the Hello World container, so it all seems to work. Here are my questions:

What would a functional LB setup look like? In Rancher 1 I created one large global LB that received all *.my.domain requests and forwarded them to the stacks. This LB also terminated SSL.

Is this still the way to go in Rancher 2? How would I request Let's Encrypt certificates for the domains?

How should I set up DNS? Just A records pointing to the Rancher nodes, and let them handle it from there?

I hope you can give me some insight into how you run your operations with Rancher.

Regards

Max

  1. Generally you create at least one ingress rule per service. A load balancer is already in place via the default NGINX ingress controller. (You can select which SSL certificate to use per ingress rule.)

  2. You need to deploy cert-manager and either let it create a Let's Encrypt ClusterIssuer or create one manually, then create a Certificate resource that references that ClusterIssuer. Keep in mind that, as far as I have seen, you need to use kubectl to create the actual Certificate, since this does not appear to be fully supported for cert-manager in the UI.

  3. Yes
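A minimal sketch of point (1): an Ingress rule that routes a hostname to a service and attaches a TLS certificate. The service name, host, and secret name here are placeholders, not anything from the thread, and the exact `apiVersion` depends on your Kubernetes version (older clusters use `extensions/v1beta1`):

```yaml
# Host-based routing with a TLS cert, as in point (1).
# Names and hosts are hypothetical examples.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: hello-world
  namespace: default
spec:
  rules:
    - host: app.my.domain
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: hello-world
                port:
                  number: 80
  tls:
    - hosts:
        - app.my.domain
      secretName: app-my-domain-tls   # secret holding the cert/key
```

Applied with `kubectl apply -f ingress.yaml`, or created via the ingress section of the Rancher UI.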
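And a sketch of point (2), assuming cert-manager is installed in the cluster: a Let's Encrypt ClusterIssuer plus a Certificate that references it. The email, domain, and secret names are placeholders, and the API group varies between cert-manager versions (very old releases used `certmanager.k8s.io/v1alpha1`):

```yaml
# Let's Encrypt ClusterIssuer and a Certificate using it, per point (2).
# Email, domain, and names are hypothetical examples.
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: admin@my.domain
    privateKeySecretRef:
      name: letsencrypt-prod-account-key
    solvers:
      - http01:
          ingress:
            class: nginx
---
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: app-my-domain
  namespace: default
spec:
  secretName: app-my-domain-tls   # cert-manager stores the issued cert here
  issuerRef:
    name: letsencrypt-prod
    kind: ClusterIssuer
  dnsNames:
    - app.my.domain
```

As noted above, the Certificate typically has to be created with `kubectl apply -f`, not through the UI.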

Hi Liam,

thanks for your reply. I did some reading on the topic last night and ended up at MetalLB.
As far as I understood it, LoadBalancer services are provisioned by AWS or another cloud Kubernetes provider. That's the catch: I don't use a cloud provider, just some virtual servers.

So to get everything working I would have to set up a VPN, connect all the servers, and let MetalLB manage it, acting as the gateway between Rancher and the outside world.

Or I use NodePort, which exposes a port on each node's IP to the outside, and do all the load balancing with e.g. HAProxy or NGINX, but then I have to configure that myself.
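For reference, the NodePort approach sketched above would look roughly like this: the service is exposed on a fixed port (here 30080) on every node, and an external HAProxy/NGINX you configure yourself forwards traffic to `<node-ip>:30080`. Service name, labels, and port numbers are placeholders:

```yaml
# NodePort sketch: reachable on port 30080 of every node's IP.
# All names and ports here are hypothetical examples.
apiVersion: v1
kind: Service
metadata:
  name: hello-world
spec:
  type: NodePort
  selector:
    app: hello-world
  ports:
    - port: 80          # cluster-internal service port
      targetPort: 8080  # container port
      nodePort: 30080   # externally reachable port on each node
```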

This was much easier in Rancher 1…


It’s not really all that different or more difficult with v2. You will likely set up one or more external LBs to distribute traffic to your cluster nodes, registering those nodes as alias targets in your DNS solution (in my case AWS Route 53). You can then set up an ingress controller to route requests to the associated services based on either host headers or resource paths. The rules for those are configured in much the same way as in v1.

There are other options for load balancing, of course. Depending on how you created your cluster you may get the ingress controller out of the box (you do if you use RKE). If you create the cluster another way, the NGINX ingress controller is available in the catalogue with some sane defaults, so it’s easy to DIY too. Certs (including wildcards) can be added where you need them; that is well covered in the install docs.
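The external-LB part of this could be sketched with a plain HAProxy config that passes TCP through to the ingress controller on every node (the NGINX ingress listens on ports 80/443 on each node by default with RKE, so TLS is still terminated at the ingress). The node IPs here are made-up examples:

```
# Sketch of an external LB in front of three cluster nodes.
# TCP passthrough so the ingress controller still terminates TLS.
# Node IPs are hypothetical placeholders.
frontend http-in
    bind *:80
    mode tcp
    default_backend ingress-nodes-http

frontend https-in
    bind *:443
    mode tcp
    default_backend ingress-nodes-https

backend ingress-nodes-http
    mode tcp
    balance roundrobin
    server node1 10.0.0.11:80 check
    server node2 10.0.0.12:80 check
    server node3 10.0.0.13:80 check

backend ingress-nodes-https
    mode tcp
    balance roundrobin
    server node1 10.0.0.11:443 check
    server node2 10.0.0.12:443 check
    server node3 10.0.0.13:443 check
```

A wildcard DNS record (e.g. `*.my.domain`) pointing at this LB then lets the ingress rules do all further routing, with no DNS changes per service.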

Hi Fraser,

I guess the point where I am stuck is how to get traffic into the Rancher environment without much hassle. But perhaps the problem is just in my head:

When I set up the LB for the three Rancher nodes and point DNS at it, I can only reach the Rancher backend via that LB.

I would like to use the same LB to connect to all the other pods created on the three nodes.
Previously I had example.com:8080, and when I created a container I told the LB to forward pod.example.com to that container. This worked for every *.example.com/*.

If I connect to rancher.example.com now, I get the Rancher backend, which is fine. But how would I set up an ingress controller that is reachable on example.com and distributes all the traffic to the pods? It doesn’t have to be a single one; I would be fine with an ingress rule per service, but I want to point one DNS entry at the ingress controller and never touch the DNS entries, or anything else outside of Rancher, again.

Somehow I fail to see how to do this, and that makes the switch impossible for me. I hope you can give me some insight on this.