[Solved] Connect multiple domains to workload pods

I’m a developer picking up Rancher and learning the ropes. I’m new to this, so if this is considered an anti-pattern or the wrong approach, please let me know! I’ve been struggling with this for a couple of days and could really use any help.

  1. I own two domains from Namecheap.
  2. Both domains’ DNS records point to Cloudflare.
  3. Cloudflare points to my DigitalOcean server IP (where Rancher lives).
  4. I create two workloads in Rancher, each a separate Nginx server.

The workloads expose my server IP plus a random port so I can access the webpage,
e.g. http://xxx.xxx.xxx.xxx:31099/ and http://xxx.xxx.xxx.xxx:23466/

How can I point my domains to their respective Workload pod?

Example:
Domain1 --> xxx.xxx.xxx.xxx:31099/
Domain2 --> xxx.xxx.xxx.xxx:23466/

Use an ingress, a.k.a. a Load Balancer in the Rancher UI.
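For reference, host-based routing with an ingress looks roughly like the sketch below. The Service names (`site1-nginx`, `site2-nginx`) and hostnames are placeholders, not anything from this thread, and very old clusters used the `extensions/v1beta1` API instead of `networking.k8s.io/v1`:

```yaml
# One ingress, two hostnames, each routed to its own Service.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: two-domains
spec:
  rules:
  - host: domain1.example.com        # placeholder hostname
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: site1-nginx        # placeholder Service name
            port:
              number: 80
  - host: domain2.example.com        # placeholder hostname
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: site2-nginx        # placeholder Service name
            port:
              number: 80
```

With rules like these, both domains can point at the same node IP; the ingress controller inspects the Host header and routes to the right pods, so the random node ports stop mattering.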

I’ve added the load balancer to my cluster, but I haven’t been successful in connecting the hostname to the workload pod. If I go to my domain, it takes me to my server IP, which ends up at the Rancher login instead of loading my workload pod.

I’m trying to run a plain Nginx server. I have the option to forward ports in my workload, but there are four different options for how to forward them. Do you know which settings I should use? The load balancer gives me yet another option to forward my port.
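For what it’s worth, the port-mapping choices in the workload UI correspond to fields on a Kubernetes Service or pod spec. A minimal sketch of the NodePort variant, with placeholder names:

```yaml
# Exposes container port 80 on a high port (30000-32767) on every node.
apiVersion: v1
kind: Service
metadata:
  name: site1-nginx          # placeholder name
spec:
  type: NodePort
  selector:
    app: site1-nginx         # assumed label on the workload's pods
  ports:
  - port: 80                 # service port (what an ingress targets)
    targetPort: 80           # container/listening port
    # nodePort: 31099        # auto-assigned from 30000-32767 if omitted
```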

I apologize for all the questions. The 2.0 documentation goes into detail about how things work rather than giving instructions for what things do.

Thanks @etlweather

Thank you for responding! I thought I should give an update on how I got it working. I’m not too familiar with Kubernetes, so please correct me if I’m wrong.

My mistake was trying to run my Rancher server on the same server as my cluster. To reach the Rancher server, I had exposed port 80 so I could access it via my domain. Because port 80 was already taken, all of the domains that I thought were pointing to my load balancer were actually just loading my Rancher server. At the time I didn’t see the issue, but now it’s so obvious. So my original issue was that my domains were loading the Rancher server and not my pods.

My setup is this:

  1. I have one DigitalOcean droplet running just Rancher.
  2. When I create a new cluster in Rancher, I use the DigitalOcean option (after adding the API key from my DO account), and it spins up a DO droplet with Kubernetes installed.
  3. I add my workloads (pods/images) and then add my load balancer. I used nginx for the image and exposed port 80 as the container/listening port, with NodePort as the mapping. For my load balancer, I added all my domains, so in my case I had three rules pointing my hosts to their containers (see the sketch after this list).
  4. I point my domains in Cloudflare to the IP of the new server that Rancher spun up, not my Rancher server IP.
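Here is a sketch of what step 3 amounts to in raw Kubernetes terms, assuming a placeholder name of `site1-nginx`; Rancher generates the equivalent objects from the UI:

```yaml
# The workload: a plain nginx Deployment listening on container port 80.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: site1-nginx           # placeholder name
spec:
  replicas: 1
  selector:
    matchLabels:
      app: site1-nginx
  template:
    metadata:
      labels:
        app: site1-nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80   # the container/listening port from step 3
```

The NodePort Service and the ingress host rules are the same shapes sketched earlier in the thread: one ingress rule per domain, each pointing at the Service for its workload.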

If you are adding a new cluster in Rancher and use DO, make sure you use the IP of the new server that gets provisioned via the API key.

I fully understood that, but what if I have a cluster with more than one etcd/control plane node?

Like, Rancher will be the brain, etcd/control plane will be the heart, and the rest of the body will be the workers, etc…

So what if I have more than one heart?

A cluster with multiple etcd nodes replicating the same data and multiple control plane nodes talking to it is just a highly-available/fault-tolerant cluster.

A cluster with multiple unrelated etcd databases is not a thing, that’s multiple unrelated clusters. You can manage multiple clusters from one Rancher installation.
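For example, an RKE cluster.yml (the provisioning format Rancher uses for custom clusters) describes an HA cluster just by repeating the roles across nodes; the IPs and SSH user below are made up:

```yaml
# Three nodes share the etcd and controlplane roles (the replicated
# "heart"); the remaining nodes only run workloads.
nodes:
- address: 10.0.0.1        # made-up IP
  user: ubuntu             # assumed SSH user
  role: [etcd, controlplane]
- address: 10.0.0.2
  user: ubuntu
  role: [etcd, controlplane]
- address: 10.0.0.3
  user: ubuntu
  role: [etcd, controlplane]
- address: 10.0.0.4
  user: ubuntu
  role: [worker]
- address: 10.0.0.5
  user: ubuntu
  role: [worker]
```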

@vincent What I meant is: a domain points to a namespace running on 10 workers and controlled by multiple etcd nodes. Where exactly should the domain be pointed? At which etcd node?

Etcd is the database and is irrelevant to what you’re asking. It doesn’t run your workloads; they run on nodes with the worker role (though those may happen to coincide with nodes that have the etcd role).

Where you point DNS records depends on the kind of workload and how it’s exposed. It could be a host port (point DNS at whichever nodes your scheduling rules place the pod on), a node port (any worker node), an ingress (nodes running the ingress controller, which is all workers by default), a load balancer (the IP or name assigned to it by the provider), etc.
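To make the host-port case concrete, here’s a sketch with placeholder names; hostPort lives in the pod spec, so the port only exists on whichever node the pod lands on:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx-hostport     # placeholder name
spec:
  containers:
  - name: nginx
    image: nginx
    ports:
    - containerPort: 80
      hostPort: 8080       # reachable only at <scheduled node's IP>:8080
```

Contrast that with a NodePort Service, which opens the same port on every node regardless of where the pod actually runs.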

Projects like external-dns can keep a DNS record up to date with the appropriate target IPs automatically. Or, for on-premise nodes, MetalLB can create local load balancers and float an IP between them to keep it available.
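As a sketch of the external-dns approach, assuming external-dns is already deployed and has credentials for your DNS provider, annotating a Service is enough for it to create and maintain the record (the hostname and names below are placeholders):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: site1-lb
  annotations:
    # external-dns watches for this annotation and creates a DNS record
    # pointing at the Service's external IP.
    external-dns.alpha.kubernetes.io/hostname: domain1.example.com
spec:
  type: LoadBalancer       # on-premise, MetalLB can satisfy this type
  selector:
    app: site1-nginx       # placeholder label
  ports:
  - port: 80
```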