I'm running a Java app behind a Rancher load balancer and am seeing disturbing behavior when I scale down the application (Rancher 1.2.0-pre2, 4 compute nodes).
docker-compose:

myapp:
  labels:
    io.rancher.container.pull_image: always
  image: myapp:poc2
myapp-lb:
  ports:
    - 8075:8080
  labels:
    io.rancher.scheduler.global: 'true'
  tty: true
  image: rancher/load-balancer-service
  links:
    - myapp:myapp
  stdin_open: true
rancher-compose:

myapp:
  scale: 5
  health_check:
    port: 8080
    interval: 2000
    recreate_on_quorum_strategy_config:
      quorum: 2
    initializing_timeout: 60000
    unhealthy_threshold: 3
    strategy: recreateOnQuorum
    request_line: GET "/myapp/monitor/verify" "HTTP/1.0"
    healthy_threshold: 2
    response_timeout: 2000
    reinitializing_timeout: 60000
myapp-lb:
  load_balancer_config:
    haproxy_config: {}
  health_check:
    port: 42
    interval: 2000
    unhealthy_threshold: 3
    healthy_threshold: 2
    response_timeout: 2000
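For reference, the health check above just expects a simple monitor endpoint in the app to answer successfully within response_timeout. A minimal sketch of what that endpoint could look like, assuming a plain servlet mapped to /myapp/monitor/verify (the class name and body are hypothetical; the real check is app-specific):

import java.io.IOException;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// Hypothetical stand-in for the endpoint behind
// GET "/myapp/monitor/verify" on port 8080.
public class VerifyServlet extends HttpServlet {
    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws ServletException, IOException {
        // A successful response inside response_timeout (2000 ms) counts as a
        // healthy probe; unhealthy_threshold (3) consecutive failed probes
        // mark the container unhealthy.
        resp.setStatus(HttpServletResponse.SC_OK);
        resp.getWriter().write("OK");
    }
}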
When I decrement the scale of myapp from 5 to 4, a JMeter script I have running shows a clear dip in throughput:

[Throughput graph]

Response time jumps up during that period too.
JMeter (10 threads, also seen with 100) is hitting a load balancer that spreads the requests across all of the myapp-lb instances.
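In case it helps to reproduce without JMeter, here is a rough stand-in for the same load pattern in plain Java; the front-end URL and the request path are assumptions based on the setup above:

import java.net.HttpURLConnection;
import java.net.URL;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.atomic.AtomicLong;

// Rough equivalent of the JMeter plan: N threads hammering the external
// load balancer and printing successes vs. errors once a second.
// LB_URL is hypothetical; substitute the real front-end address.
public class LoadProbe {
    static final String LB_URL = "http://my-external-lb:8075/myapp/monitor/verify";
    static final int THREADS = 10;

    public static void main(String[] args) throws Exception {
        AtomicLong ok = new AtomicLong();
        AtomicLong failed = new AtomicLong();
        ExecutorService pool = Executors.newFixedThreadPool(THREADS);
        for (int i = 0; i < THREADS; i++) {
            pool.submit(() -> {
                while (!Thread.currentThread().isInterrupted()) {
                    try {
                        HttpURLConnection conn =
                                (HttpURLConnection) new URL(LB_URL).openConnection();
                        conn.setConnectTimeout(2000);
                        conn.setReadTimeout(2000);
                        if (conn.getResponseCode() == 200) ok.incrementAndGet();
                        else failed.incrementAndGet();
                        conn.disconnect();
                    } catch (Exception e) {
                        failed.incrementAndGet();
                    }
                }
            });
        }
        // Crude per-second throughput/error counter.
        while (true) {
            Thread.sleep(1000);
            System.out.printf("ok=%d failed=%d%n",
                    ok.getAndSet(0), failed.getAndSet(0));
        }
    }
}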
This is a pretty significant interruption for a single-instance scale-down. Am I missing something about how to orchestrate this?