Strategy for rebalancing containers across hosts

Given a situation where I am adding n new hosts to my Rancher Cattle setup, how should I spread the load across my hosts so that all hosts carry a relatively equal share? I know about the “run container on all hosts” option - that’s not what I want. We have a ton of super-small container-based web services that only need one or two instances each. I have tried simply stopping an entire stack and starting it again, but that only starts it up on the host where it was stopped, so the newly added compute resources are not taken into account at all.


Just an FYI: I’ve discovered that scaling each service in a stack down to 0 and then back up seems to take the newly provisioned host into account and starts containers on it.
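
If you want to script that trick, here is a minimal sketch against the Rancher 1.x REST API. The endpoint URL, service ID, and API keys are placeholders for your own environment, and the `transitioning` polling field is an assumption based on the v1 API schema - verify against your setup:

```python
import time
import requests

RANCHER_URL = "http://rancher.example.com:8080/v1"  # placeholder API endpoint
AUTH = ("ACCESS_KEY", "SECRET_KEY")                 # environment API keypair
SERVICE_ID = "1s42"                                 # placeholder service ID

def set_scale(service_id, scale):
    """Set the service scale, then poll until Rancher finishes transitioning."""
    url = f"{RANCHER_URL}/services/{service_id}"
    resp = requests.put(url, json={"scale": scale}, auth=AUTH)
    resp.raise_for_status()
    while requests.get(url, auth=AUTH).json().get("transitioning") == "yes":
        time.sleep(2)

# Scale to 0 (stops every container), then back to the original count;
# on the way back up the scheduler considers the newly added hosts.
original = requests.get(f"{RANCHER_URL}/services/{SERVICE_ID}", auth=AUTH).json()["scale"]
set_scale(SERVICE_ID, 0)
set_scale(SERVICE_ID, original)
```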

We have a bunch of volume-backed containers, though, and for those we simply don’t have the option to scale above 1 - so I’d really like a better way of rebalancing my container pool to ease the addition of compute resources to my Cattle cluster.


This is a problem that has existed forever with Rancher, and with Docker Swarm for that matter. Rancher’s answer to it appears in 1.5 as a “host scaling” webhook.

Pretty sure that’s not the solution I’m after. As far as I can see, the “host scaling” webhook uses Docker Machine drivers to provision additional hosts. In my case the new host will already have been provisioned by AWS Auto Scaling or similar; what I need is a way for Rancher to start moving workloads over to it to relieve my existing hosts.

There is an upstream Kubernetes feature request for this functionality (called the rescheduler) at https://github.com/kubernetes/kubernetes/issues/12140. The rescheduler is currently still in the design phase upstream.

In full disclosure, I work for Turbonomic, and we have been working in the community to propose a solution for this feature. We published an open-source component for k8s that enables the Turbonomic Platform to perform continuous container/pod placement (motions) based on real-time and historical resource demand for stateless applications.

TAP integrates with the orchestration primitives already in Kubernetes, such as the replication controller. In the case of a migration, Turbonomic kills the pod and tells the replication controller where to start the new one, maintaining the desired state. This prevents race conditions, since it is the controller that executes every start/stop. TAP also provides application-SLA-driven autoscaling, as well as scaling of the k8s cluster itself based on workload demand.
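
As a simplified illustration of the delete-and-let-the-controller-recreate mechanism described above (not Turbonomic’s actual placement logic), deleting a controller-owned pod via the official Kubernetes Python client looks roughly like this; the pod name and namespace are placeholders:

```python
from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() when run in-cluster
v1 = client.CoreV1Api()

# Deleting the pod does not change the desired state; the replication
# controller notices the missing replica and creates a replacement,
# at which point a new placement decision is made for it.
v1.delete_namespaced_pod(name="web-abc12", namespace="default")  # placeholder names
```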

We have published more information at https://turbonomic.com/kubernetes/ on enabling this functionality with the Turbonomic Platform for workloads running on AWS. The same functionality can be enabled on any k8s deployment, including Rancher’s.

Feel free to reach out to us if we can help.

What I do is mark the host as Inactive and then delete the containers on that host; they should then be moved to the host with the most available resources, i.e. the new one.
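
Scripted, that manual drain might look roughly like the sketch below, again against the Rancher 1.x REST API. The URL, host ID, and API keys are placeholders, and the `instances` link and `deactivate`/`activate` action names are assumptions based on the v1 API schema, so check them against your own setup:

```python
import requests

RANCHER_URL = "http://rancher.example.com:8080/v1"  # placeholder API endpoint
AUTH = ("ACCESS_KEY", "SECRET_KEY")                 # environment API keypair
HOST_ID = "1h7"                                     # placeholder: the host to drain

# 1. Deactivate the host so the scheduler stops placing containers on it.
requests.post(f"{RANCHER_URL}/hosts/{HOST_ID}?action=deactivate", auth=AUTH).raise_for_status()

# 2. Delete each running container on the host; their services recreate
#    them on an active host, favouring the one with the most free resources.
for c in requests.get(f"{RANCHER_URL}/hosts/{HOST_ID}/instances", auth=AUTH).json()["data"]:
    if c.get("state") == "running":
        requests.delete(f"{RANCHER_URL}/containers/{c['id']}", auth=AUTH)

# 3. Reactivate the host once the containers have moved elsewhere.
requests.post(f"{RANCHER_URL}/hosts/{HOST_ID}?action=activate", auth=AUTH).raise_for_status()
```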
