Load balancer or service link?

Both a load balancer and a service link can provide high availability for a service.
A load balancer does it through a proxy, while a service link relies on DNS.

Because this DNS belongs only to Rancher itself, a public-facing service can only use a load balancer to distribute requests.

But for an internal service, is a service link equivalent to a load balancer?
Can a service link be used for high availability?

Or might requests be distributed unevenly because of DNS caching?

For example:

Service backend, with 3 containers: backend_1, backend_2, backend_3.

Service nginx, with a proxy_pass to backend.
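
Roughly this kind of config, just as a sketch (backend is simply the service name from this example):

```
# Hypothetical nginx config for the example above.
# "backend" is the Rancher service name, resolvable through the service link.
server {
    listen 80;

    location / {
        proxy_pass http://backend;
    }
}
```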

Would it always pass requests to backend_1, because once nginx resolves backend to backend_1 it will keep using backend_1?

In other words, can I use a service link instead of a load balancer for internal services?

Depending on the client behavior, the answer is somewhere between “does not balance properly at all” and “works ok as a simple way to coarsely balance requests”.

The DNS server will return an answer with all of the IPs of the (healthy) backend containers, in random order each time it is asked, with a TTL of 1 second.

But the client is free to do whatever it likes with this information. Some will take just the first answer, others pick randomly from the list, or not so randomly. Some will respect the TTL, others will cache the value for some amount of time they decide, or forever, or until a request fails, etc.
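
As a rough way to see what a client actually receives, you could run something like this from a container where the service name resolves through Rancher DNS (a sketch; `backend` is just the example service name):

```python
import socket

# Query the container's resolver for "backend" several times and print the
# returned address list. What comes back, and in what order, is the "client
# behavior" part: getaddrinfo itself may already reorder or filter the answers.
for i in range(5):
    infos = socket.getaddrinfo("backend", 80, proto=socket.IPPROTO_TCP)
    ips = [info[4][0] for info in infos]
    print(f"lookup {i + 1}: {ips}")
```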

Nginx specifically will resolve names exactly once on startup by default. To make it do anything else you have to give it a resolver. The Rancher DNS service is always 169.254.169.250 on any host.
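
For example, something like this (a sketch, not a drop-in config; the variable in proxy_pass is what forces nginx to re-resolve through the resolver at request time instead of only once at startup):

```
server {
    listen 80;

    # Rancher's internal DNS; valid=1s roughly matches the 1-second TTL.
    resolver 169.254.169.250 valid=1s;

    location / {
        # Using a variable makes nginx resolve "backend" per request.
        set $backend_upstream "backend";
        proxy_pass http://$backend_upstream;
    }
}
```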


Thanks vincent.
Very detailed and clear.

And in addition, regarding the nginx example:
if it is not given a resolver, suppose it resolves backend to backend_1.
Once backend_1 goes down, it will never switch to backend_2 or backend_3 automatically.

So, given that

"Depending on the client behavior …"

can I conclude that using only a service link is not suitable for high availability?