We are hitting a Rancher 1.2 change hard right now.
We have a couple of services with Rancher LBs distributing traffic within our stack. We deploy the same stack multiple times, with different container versions, for testing etc. Before 1.2 we could just create a load balancer with a dynamic port, and it worked beautifully.
Now, with Rancher 1.2+, we seem to have to hard-code the port. For us, that sucks.
Because if we hard-code the port, two load balancers can’t run on the same host, or we have to manage the published ports manually, which I find really cumbersome. (Especially since we have service discovery in place which handles all of that for us just nicely.)
So, is there a way to configure our LBs without a hard-coded public port? Before Rancher 1.2 that was pretty easy; I hope the functionality didn’t just go away (because that would suck so much for our use cases … sorry).
We tried removing the “source_port” config item from the LB config in rancher-compose.yaml. This led to seemingly working stacks (all green), but the published port completely refused to accept connections.
Any help appreciated!
Cheers,
Axel.
–
Examples from our config:
---
# docker-compose.yaml, still version 1
lb:
  image: rancher/lb-service-haproxy
  ports:
    # SHOULD be mapped dynamically to the host, right? At least
    # with all other services this works.
    - "80"
  links:
    # ... etc.
---
# rancher-compose.yaml, version 2
services:
  lb:
    scale: 2
    lb_config:
      port_rules:
        - target_port: 8000
          source_port: 80 # removing leads to "connection refused" on the published port
          service: cfgtool
          path: /config/api
        - target_port: 5000
          source_port: 80
          service: billapi
          path: /bills/api
        - target_port: 9000
          source_port: 80
          service: angularfe
          path: /
Nobody?
We could really use some help here, or better still, someone pointing out a bug in our approach. It would be great if we were just doing it wrong.
We also tried supplying a custom config now. For some reason this didn’t work at all.
Help?
---
# rancher-compose.yaml
services:
  lb:
    scale: 2
    lb_config:
      config: |-
        global
            maxconn 100000
        defaults
            mode http
            log global
            no option dontlognull
            option splice-auto
            option http-keep-alive
            option redispatch
            retries 3
            timeout http-request 5s
            timeout queue 1m
            timeout connect 5s
            timeout client 1m
            timeout server 1m
            timeout http-keep-alive 10s
            timeout check 10s
        # ... and so on ...
# compare to below. It didn't get in.
---
# /etc/haproxy/haproxy.cfg
global
    chroot /var/lib/haproxy
    daemon
    group haproxy
    maxconn 4096
    maxpipes 1024
    ssl-default-bind-ciphers ECDHE-RSA-AES128-GCM-SHA256: #......
    ssl-default-bind-options no-sslv3 no-tlsv10
    ssl-default-server-ciphers ECDHE-RSA-AES128-GCM-SHA256: #......
    tune.ssl.default-dh-param 2048
    user haproxy
defaults
    errorfile 400 /etc/haproxy/errors/400.http
    # etc...
    errorfile 504 /etc/haproxy/errors/504.http
    maxconn 4096
    mode tcp
    option forwardfor
    option http-server-close
    option redispatch
    retries 3
    timeout client 50000
    # etc...
resolvers rancher
    nameserver dnsmasq 169.254.169.250:53
listen default
    bind *:42
Hi Axel,
as you have noticed, dynamic ports are not supported in the new v2 load balancer due to the changed configuration design. We are targeting support for dynamic port allocation post v1.5. There is a ticket for it where you can give your love/thumbs up: https://github.com/rancher/rancher/issues/7093
To answer your question on specifying a custom HAProxy config: the rancher-compose you posted has tabs in it, which is not allowed by the YAML specification. Please see a working example of what you are trying to achieve below. Also note that the v2 load balancer does not evaluate the links: option in the compose file - the target services are configured under the port_rules key.
docker-compose.yml
version: '2'
services:
  test-lb:
    image: rancher/lb-service-haproxy:v0.5.9
    ports:
      - 8080:8080/tcp
rancher-compose.yml
version: '2'
services:
  test-lb:
    scale: 1
    lb_config:
      config: |-
        global
            maxconn 100000
        defaults
            mode http
            log global
            no option dontlognull
            option splice-auto
            option http-keep-alive
            option redispatch
            retries 3
            timeout http-request 5s
            timeout queue 1m
            timeout connect 5s
            timeout client 1m
            timeout server 1m
            timeout http-keep-alive 10s
            timeout check 10s
      port_rules:
        - source_port: 8080
          protocol: http
          service: web/nginx-production
          target_port: 80
    health_check:
      healthy_threshold: 2
      response_timeout: 2000
      port: 42
      unhealthy_threshold: 3
      interval: 2000
Thanks for the answer. I was fearing I had done something completely wrong.
Also, tabs would be surprising (that’s why I didn’t think of it), but possible - I’ll check, thanks for noticing. I’ll update our build chain to check for this.
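A build-chain check along those lines could be as simple as grepping the compose files for literal tab characters, which the YAML spec forbids in indentation. This is just a sketch of one possible helper (the function name and file layout are made up, not anything Rancher ships):

```shell
# Hypothetical build-chain helper: fail if a YAML file contains tab characters.
# Uses printf to produce a literal tab, so it doesn't rely on GNU grep's -P flag.
check_yaml_tabs() {
  if grep -n "$(printf '\t')" "$1"; then
    echo "ERROR: tab characters found in $1" >&2
    return 1
  fi
}
```

You could then call it for each compose file before `rancher-compose up`, e.g. `check_yaml_tabs rancher-compose.yml || exit 1`.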