AWS spot instance

I would like to leverage AWS spot instances to automatically create instances. I specified user data when creating the spot fleet with the following script:

sudo docker run -e CATTLE_HOST_LABELS='prod=true&worker1=true&worker2=true&worker4=true&worker5=true&web=true&worker7=true&worker6=true' --rm --privileged \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -v /var/lib/rancher:/var/lib/rancher \
  rancher/agent:v1.2.10 http://rancher.xxxxx.xxx:8080/v1/scripts/D6D5876284500A53F892:1514678400000:dt9LCdNL8LCCWm3XkyrulTBEI
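
For reference, this is roughly how the user data gets wired into the fleet. Treat it as a sketch only: the registration token is a placeholder, the label list is shortened, and the AWS CLI call assumes the launch spec lives in a spot-fleet-config.json file.

# Write the agent command as a cloud-init script (Docker is assumed to be
# baked into the AMI; install it first otherwise).
cat > user-data.sh <<'EOF'
#!/bin/bash
docker run -e CATTLE_HOST_LABELS='prod=true&web=true' --rm --privileged \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -v /var/lib/rancher:/var/lib/rancher \
  rancher/agent:v1.2.10 http://rancher.xxxxx.xxx:8080/v1/scripts/<registration-token>
EOF

# Spot fleet launch specifications expect user data base64-encoded; paste the
# output into the "UserData" field of the launch spec, then submit the request.
base64 -w0 user-data.sh
aws ec2 request-spot-fleet --spot-fleet-request-config file://spot-fleet-config.json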

So far it works as expected: whenever the spot instance gets killed, the fleet re-creates another instance and Rancher automatically launches the required containers (the stacks were already defined earlier). The problem is that it takes up to 5 minutes to download the stack images, extract them, and start. During that time, the Rancher load balancer running on a separate on-demand instance tries to redirect traffic to the new host before the stacks are up and running, and I get a bad gateway error.

How do I prevent traffic redirection before the stacks are fully ready?

Did you set proper health checks on the services? The Rancher LB shouldn't route traffic to unhealthy containers.
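
Something like the sketch below in the stack's rancher-compose.yml is what I mean. It's only a rough example: the service name "web", port 80, and the /healthz path are assumptions, so adjust them to match your services.

# Declare an HTTP health check for the service; Rancher only reports the
# container healthy (and the LB only targets it) once the check passes.
cat > rancher-compose.yml <<'EOF'
version: '2'
services:
  web:                          # assumed service name; match your stack
    health_check:
      port: 80                  # container port the check hits
      request_line: GET /healthz HTTP/1.0
      interval: 2000            # ms between checks
      response_timeout: 2000    # ms before a single check counts as failed
      healthy_threshold: 2      # consecutive successes to become healthy
      unhealthy_threshold: 3    # consecutive failures to become unhealthy
      strategy: recreate        # recreate containers that stay unhealthy
EOF

# Apply it to the existing stack (assumes the stack's docker-compose.yml is in
# the same directory and RANCHER_URL/RANCHER_ACCESS_KEY/RANCHER_SECRET_KEY are set).
rancher-compose -p my-stack up -d --upgrade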

Thanks for the pointer. I noticed that the healthcheck container on the spot instance turns green while my other stacks are still downloading. My setup has one on-demand instance that is always running, and the spot instance joins later; both hosts run the same stacks. As a result, every other request I make to the load balancer returns an error until the spot instance is fully ready.