Rancher scheduler (1.2.0) unexpected behaviour

The changes to the scheduler in Cattle are giving me some unexpected behaviour.

Scenario 1:

  • 1 app server with a number of containers running on it
  • Start a 2nd app server
  • Delete a container (which causes Rancher to create a replacement instance)

Rancher creates the replacement on the same server, even though there is a new, empty app server available. (It used to try to spread containers across servers, and I would expect Rancher 1.2 to try to spread the instances as well.)

Scenario 2 (builds on the first scenario by adding scheduling rules):

  • 1 app server with a service called vote (in a stack called voting) running 5 instances
  • The vote service has a soft anti-affinity scheduling rule (see the compose snippet after this scenario):

Key: io.rancher.scheduler.affinity:container_label_soft_ne Value: io.rancher.stack_service.name=voting/vote

  • Start a 2nd app server (it would be a bonus if Rancher re-balanced at this point, but it did not do that before either)
  • Delete instance(s) of the vote service

Rancher schedules the newly created instances on the same server. When scheduling the “new” (replacement) instances, it does not take the scheduling rule into account.
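For reference, this is how the rule above is declared in the stack's docker-compose.yml. This is a minimal sketch of my setup, assuming the standard Rancher scheduling-label syntax; the image name is a placeholder, not the actual one from my stack:

    vote:
      image: example/vote  # placeholder image name
      labels:
        # Soft anti-affinity: prefer hosts that are NOT already running
        # a container of the voting/vote service. "Soft" means Rancher
        # may still co-locate instances if no other host qualifies.
        io.rancher.scheduler.affinity:container_label_soft_ne: io.rancher.stack_service.name=voting/vote

Because this is the soft (_soft_ne) variant rather than the hard container_label_ne rule, co-locating instances is allowed as a last resort; but with an empty second host available, I would still expect the replacement instance to land there.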

This does not seem correct. I can raise these as GitHub issue(s).

I have raised this ticket:

as I need the scheduler to try to 1) honour any scheduling rules and 2) distribute the load across my hosts.