Noop health check, not completely a no-op


We recently turned on some health checks for our ELK cluster, but didn’t want Rancher to reschedule containers, etc., so we set the strategy to noop (we are doing this so that we can gracefully migrate a service once it comes back healthy).
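For context, this is roughly what our `rancher-compose.yml` looks like — a sketch, assuming Rancher 1.x health check syntax; the port, thresholds, and request line are illustrative values, not our exact config:

```yaml
version: '2'
services:
  elasticsearch:
    health_check:
      # "none" is the noop strategy: don't recreate containers on failure
      strategy: none
      port: 9200
      request_line: GET "/_cluster/health" "HTTP/1.0"
      interval: 2000
      response_timeout: 2000
      healthy_threshold: 2
      unhealthy_threshold: 3
```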

However, we noticed that the “degraded” services were still being removed from load balancers pointing at them… so there was still an “op” being taken.

Is there a way to turn that off? I believe that since the health check is defined in a specific backend, we can’t override it with global or default overrides.

The “take no action” refers to the container (not) being rescheduled… If you don’t want it to be used for container replacement OR for load balancer targeting, why have a health check at all?

The idea for us is that it would prevent us from upgrading a yellow cluster, provide visual feedback, and also allow a migration script of ours to wait until an upgrade was finished and healthy before moving on.
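That “wait until healthy” step could be sketched as a small polling loop. This is a hypothetical helper, not our actual script — it assumes you can fetch the service’s `healthState` string from the Rancher API (e.g. `GET /v2-beta/projects/<pid>/services/<sid>`) and injects the fetcher as a callable so it’s easy to test:

```python
import time


def wait_until_healthy(fetch_state, timeout=600, interval=5, sleep=time.sleep):
    """Poll a service's healthState until it reports 'healthy'.

    fetch_state: callable returning the current healthState string
                 (e.g. read from the Rancher API's 'healthState' field).
    Returns True once 'healthy' is seen, False if the timeout elapses.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if fetch_state() == "healthy":
            return True
        sleep(interval)  # back off between polls
    return False
```

A migration script would call this after kicking off an upgrade and only proceed to the next service when it returns True.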