Currently I am running Rancher as a test setup with 3 hosts. I am using Rancher load balancers to route traffic to containers that run on multiple hosts.
I have a DNS record that points to the IP of host 1. This works as expected, but when host 1 goes down everything is unreachable.
What can I put in front of my hosts so that traffic goes to a host that is up?
Have you tried using the Cloudflare DNS or Route53 DNS catalog options?
They’ll dynamically generate DNS entries for each service with a publicly exposed port and associate all IPs for a service with the same DNS entry (sketched below).
Then the only manual part is making friendly CNAMEs for any services you want to make easily accessible.
The other common way aside from DNS is to put something like an AWS ELB in front of all 3 hosts, but dynamic DNS is much more the way to go IMO: you shouldn’t have to care which host an LB container ends up on as your infrastructure changes and grows.
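For illustration, here is a minimal compose sketch, assuming a hypothetical stack called mystack with a made-up front-end service web-lb and backend service web. Only the service that publishes a port would be picked up by the DNS catalog item and given a generated entry (following the servicename.stackname.environmentname.yourmasterdomain.com pattern); a friendly CNAME such as www.example.com pointing at that generated name would be the only manual step.

```yaml
# docker-compose.yml for a hypothetical stack "mystack"
version: '2'
services:
  web-lb:
    image: nginx:alpine        # stand-in for whatever fronts your containers
    ports:
      - "80:80"                # published port, so the DNS catalog item registers
                               # web-lb.mystack.<environment>.<yourmasterdomain>
  web:
    image: nginx:alpine        # backend service
    # no "ports:" section, so no public port and no generated DNS entry
```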
This seems to be what I’m looking for. I played with it for a couple of minutes and so far it looks good. Only one thing: can I let the Cloudflare DNS generate DNS entries without having publicly exposed ports on a container? I use the load balancer without a public port on the container.
DNS will only point to the IP; it won’t point to a port.
HTTP (port 80) or HTTPS (port 443) needs to be exposed for HTTP/HTTPS access, but the DNS entry will only ever read servicename.stackname.environmentname.yourmasterdomain.com regardless of which port(s) are exposed.
Yes, I know that. But the Rancher Cloudflare service only registers containers that have a public port. I don’t set up public ports on a container because that isn’t necessary; I have the load balancer in front of it.
I would prefer that it also automatically created a servicename.stackname.environmentname A record for containers without a public port.
Ah, no, it doesn’t do that; it’s only for managing external/public DNS. Sorry if I misunderstood your question.
For internal (container-to-container) DNS access, link the services: e.g. link service-a to service-b, and service-a can call http://service-b without service-b ever exposing a port publicly.
That link could be an internal (no exposed ports) load balancer or a single container.
You can also rename/alias the service when linking, so you can link api-lb as api (see the sketch below).
If they’re in the same stack, it technically works without linking as well, but linking guarantees that Rancher will spin up supporting services in the right order should you ever copy/paste a stack to a new environment.
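As a rough sketch of that aliasing, assuming made-up service and image names, linking api-lb into service-a under the alias api lets service-a reach it at http://api without api-lb publishing any public port:

```yaml
version: '2'
services:
  service-a:
    image: my-frontend:latest   # hypothetical image
    links:
      - api-lb:api              # link api-lb, aliased as "api" inside service-a
  api-lb:
    image: my-api:latest        # internal target: could be an LB or a single container
    # no published ports; reachable only via the link / internal DNS
```

With that in place, service-a can call http://api regardless of which host the api-lb container ends up on.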