The context:
I’m using an HAProxy container to serve various small websites. I tried the Rancher Load Balancer (http://rancher.com/virtual-host-routing-using-rancher-load-balancer/) and it worked perfectly, but most of my websites need password protection (basic access authentication), and currently that can’t be set up at the Rancher Load Balancer level. That’s why I’m keeping the HAProxy service I was already using with Rancher 0.28.0.
The containers are configured to use the Managed Network.
The HAProxy service is linked to the different websites, so the haproxy.cfg contains entries like these (where myapp1 and myapp2 are the link names):
backend myapp1
    balance roundrobin
    option httpclose
    option forwardfor
    server s1 myapp1

backend myapp2
    acl auth_ok http_auth_group(webaccess) myapp2
    http-request auth if !auth_ok
    balance roundrobin
    option httpclose
    option forwardfor
    server s1 myapp2
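
For completeness, the auth_ok ACL relies on a userlist named webaccess defined elsewhere in haproxy.cfg. A minimal sketch of what it looks like (the user name and password here are placeholders, not my real credentials):

```
userlist webaccess
    group myapp2 users someuser
    user someuser insecure-password changeme
```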
My environment:
Rancher v0.37.0
Cattle v0.91.0
UI v0.47.0
Docker 1.8.2
The issue:
It seems that HAProxy starts too fast, before the links are resolvable, and the container fails with errors like these:
[ALERT] 258/132043 (7) : parsing [/usr/local/etc/haproxy/haproxy.cfg:110] : 'server s1' : invalid address: 'myapp1' in 'myapp1'
[ALERT] 258/132043 (7) : parsing [/usr/local/etc/haproxy/haproxy.cfg:110] : 'server s1' : invalid address: 'myapp2' in 'myapp2'
[ALERT] 258/132043 (7) : Error(s) found in configuration file : /usr/local/etc/haproxy/haproxy.cfg
[ALERT] 258/132043 (7) : Fatal errors found in configuration.
Temporary solution:
I could make it work by creating my own haproxy image with this Dockerfile:
FROM haproxy:1.5
CMD sleep 5 && haproxy -f /usr/local/etc/haproxy/haproxy.cfg
I’m just overriding the command. Sleeping for 5 seconds gives the network time to be set up (I suppose…), and HAProxy then starts successfully since it can resolve the hostnames (the links).
This solution is not ideal and I wanted your view on it… Cheers
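
Instead of a fixed sleep, a slightly more robust variant could poll DNS until the links actually resolve before starting HAProxy. This is just a sketch (the hostnames, the 30-second default timeout, and the availability of getent in the image are assumptions):

```shell
#!/bin/sh
# wait_for_dns HOST [TIMEOUT_SECONDS]
# Polls getent until HOST resolves; returns non-zero if the timeout expires.
wait_for_dns() {
  _host="$1"
  _deadline=$(( $(date +%s) + ${2:-30} ))
  until getent hosts "$_host" >/dev/null 2>&1; do
    [ "$(date +%s)" -ge "$_deadline" ] && return 1
    sleep 1
  done
}

# Used as the container's CMD, something like:
#   wait_for_dns myapp1 && wait_for_dns myapp2 \
#     && exec haproxy -f /usr/local/etc/haproxy/haproxy.cfg
```

This way the container starts as soon as the links are resolvable instead of always waiting 5 seconds, and it fails loudly if they never become resolvable.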
Additional notes:
I didn’t have this issue with Rancher 0.28.0 but a lot of changes must have been made since then.
The full story is that I tried to upgrade from Rancher 0.28.0 because the disk was filling up (rancher/server was accumulating a really big ibdata1 file), but the upgrade didn’t work: Rancher wouldn’t start on port 8080. I didn’t see any errors in the logs, only that the service on port 8080 wasn’t accessible. Anyway, a lot of changes must have been made to the MySQL schema between the two versions… I wasn’t too surprised by the failure.