Is there any kind of resource scheduling or restriction? For our work we sometimes have to launch a large number (300 across two hosts) of short-lived containers. If we submit the services via the API, it spins up 300 containers and the entire system grinds to a halt. Is there a queuing mechanism that would pause services (or delay their start) when the hosts are full?
The 1.2 release will introduce resource constraints for container scheduling (CPU, memory, etc). I assume this will work for you, but you'll have to specify memory or CPU constraints on your services as you launch them.
Once the hosts are filled up, the remaining services would simply fail to schedule, which may or may not be desirable but should work.
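As a rough sketch of what declaring such constraints could look like (this assumes a Docker Compose-style service definition with `deploy.resources`; the exact syntax and supported fields will depend on the 1.2 release, so treat the names below as illustrative):

```yaml
version: "3"
services:
  worker:
    image: myorg/short-lived-job   # hypothetical image name
    deploy:
      replicas: 300
      resources:
        # Reservations tell the scheduler how much capacity each
        # container needs; once the hosts' reservable capacity is
        # exhausted, further replicas cannot be placed.
        reservations:
          cpus: "0.25"
          memory: 128M
        # Limits cap actual usage so one container cannot starve others.
        limits:
          cpus: "0.50"
          memory: 256M
```

With reservations like these, the scheduler packs containers onto the two hosts until their capacity is accounted for; the rest fail to schedule rather than overloading the machines, as described above.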