Question about graceful scaling down containers & HAProxy

Hi,
I have some containers deployed across many hosts for high availability. I also have a Consul backend, and Registrator runs on each host to register my containers.

Then consul-template updates my HAProxy configuration, which load-balances incoming traffic across the containers. Fine.
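For context, the HAProxy backend section is generated by a consul-template fragment roughly like this (the "web" service name and the backend name are placeholders for my real ones):

    backend web_back
        balance roundrobin
        {{ range service "web" }}
        server {{ .Node }}_{{ .Port }} {{ .Address }}:{{ .Port }} check
        {{ end }}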

My problem occurs when I remove some instances (scale down). If a container is shut down while a request is in flight, I get an error. Is there a way to deregister the shutting-down container before stopping it, so that it is removed from HAProxy's scope first?

Are there any best practices for doing that?

Thank you.

The functionality you are looking for is called quiesce. This process drains existing connections and moves them to any other available container in the given pool; once that is complete, the container is marked "down" and no new connections are sent to it. Then you can shut down the container (or it shuts down automatically based on your scale-down rules).

I know how this works in other load balancers such as F5, Citrix NetScaler, and A10, but I have no idea how it works, or whether it is even available, in HAProxy. This will be good to look into.
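It might be worth checking HAProxy's runtime (stats) socket: recent versions (1.6+) appear to support putting a server into a "drain" or "maint" state, which sounds like the same idea. A rough sketch, assuming an admin-level socket is configured and using placeholder backend/server names:

    # haproxy.cfg: expose an admin socket (placeholder path)
    global
        stats socket /var/run/haproxy.sock mode 600 level admin

    # stop sending new connections to web_back/web1, let existing ones finish
    echo "set server web_back/web1 state drain" | socat stdio /var/run/haproxy.sock

    # watch current sessions (scur column) drop to zero before stopping the container
    echo "show stat" | socat stdio /var/run/haproxy.sock | grep web_back

As far as I understand, "drain" lets existing sessions finish while refusing new ones, whereas "maint" takes the server out completely.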

Phillip

Am I the only one with this kind of problem? I'm using the classic "Registrator, Consul, reverse proxy" pattern, and I imagine other people have run into the same issue.
Do you handle this client-side (with retries?), or is there another solution?

@Phillip_Ulberg: I did not find anything about a quiesce-like feature in HAProxy.

Thank you

A workaround might be to configure more retries on the HAProxy side with a low connect timeout:

haproxy.cfg

defaults
        # retry a failed connection attempt up to 5 times
        retries 5
        # give up on a connection attempt after 100 ms
        timeout connect 100

@Tomas_Holcman: this will not work, because retrying in HAProxy only applies to requests that are still inside HAProxy. Once a request has been sent to a backend, it is not retried if that backend crashes. Willy Tarreau (HAProxy's author) explains that retrying after a failure is a bad idea anyway, since stateless HTTP requests are not necessarily idempotent.
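One direction I am considering (not tested yet): deregister the service from the local Consul agent first, give consul-template time to regenerate the config and reload HAProxy, and only then stop the container. A minimal sketch, assuming the Consul agent listens on localhost:8500 and using placeholder service/container names:

    #!/bin/sh
    SERVICE_ID="web-1"      # placeholder: the service ID Registrator registered
    CONTAINER="web-1"       # placeholder: the container to stop

    # 1. Remove the service from Consul so consul-template drops it from haproxy.cfg
    curl -s -X PUT "http://localhost:8500/v1/agent/service/deregister/${SERVICE_ID}"

    # 2. Wait for consul-template to rewrite the config and reload HAProxy,
    #    plus some margin for in-flight requests to complete
    sleep 10

    # 3. The container no longer receives new traffic and can be stopped
    docker stop "${CONTAINER}"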

Thank you anyway :wink: and if anyone has an idea …