Load Balancer config

After I’ve created a load balancer, I don’t see the targets, ports, etc. for it in the UI (just the containers).

Also, when I click edit, I can add new targets, but I would expect to see the current targets as well (in order to edit/remove them).

I have an underlying issue with mapping service ports, but these UI issues make me unsure of where the real problem lies (i.e. it seems the target I added when creating the load balancer actually is used, even though I can’t see it).

This may also be related to: Rancher agent behind proxy

Can you tell us what version of Rancher you are using? Load balancers have been going through a lot of enhancements over the last couple of releases, but I have no issues creating a load balancer and seeing the targets when I edit it.

There may be some issues related to your proxy, as NO_PROXY must be set up per the comment in that thread for communication between the server and agents to work.
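For illustration, that means passing the proxy variables when registering the agent, something along these lines (proxy host, server IP, and registration URL are placeholders):

sudo docker run -d --privileged \
  -e HTTP_PROXY="http://proxy.example.com:3128" \
  -e HTTPS_PROXY="http://proxy.example.com:3128" \
  -e NO_PROXY="localhost,127.0.0.1,<rancher-server-ip>" \
  -v /var/run/docker.sock:/var/run/docker.sock \
  rancher/agent <registration-url>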

Yes, sorry, forgot to mention that: it’s Rancher v0.30.0.

But as you say, it’s most likely a config issue on my end, and I’m hopeful that setting NO_PROXY properly will resolve my issues :wink:

So, with NO_PROXY properly set, the targets in my load balancer show up OK, but I still have an issue with the exposed ports not being reachable.

That is, there are no exposed/mapped ports when running docker inspect on the LB agent: both Config.ExposedPorts and NetworkSettings.Ports are {}, and HostConfig.PortBindings is null.
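For reference, this is roughly how I inspected those fields (the container name is a placeholder):

docker inspect --format '{{json .Config.ExposedPorts}}' <lb-agent-container>
docker inspect --format '{{json .NetworkSettings.Ports}}' <lb-agent-container>
docker inspect --format '{{json .HostConfig.PortBindings}}' <lb-agent-container>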

However, in the UI for the LB agent, I see that it ought to expose port 80.

I also notice that on the detail page for the LB agent, the port mappings are there, but the IP address field is empty.

Not sure what else to look for at the moment… talking to port 80 on the LB agent container doesn’t work either (connection refused), on both the Docker IP (172.…) and the Rancher-assigned IP (10.42.…).

Looking at /etc/haproxy/haproxy.cfg though, the last line says: listen web 0.0.0.0:9, which surprises me a bit (I’m not familiar with HAProxy, so I don’t know if this is a common HAProxy thing, but it looks suspect to me). And curl’ing the container at port 9 gives an empty reply (so it does listen on that port).
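For the record, the check I ran was along these lines (the container IP is a placeholder):

curl -v http://172.17.0.x:9/
# returned an empty reply (curl exit code 52)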

This is as far as I’ve got at the moment… hopefully you can spot something obvious I’m doing wrong :wink:

Here’s the netstat and iptables output from the LB agent:

root@d396d2c77f5b:/# netstat -tnap
Active Internet connections (servers and established)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name
tcp        0      0 127.0.0.1:2812          0.0.0.0:*               LISTEN      454/monit
tcp        0      0 0.0.0.0:9               0.0.0.0:*               LISTEN      460/haproxy
root@d396d2c77f5b:/# iptables -L
Chain INPUT (policy ACCEPT)
target     prot opt source               destination

Chain FORWARD (policy ACCEPT)
target     prot opt source               destination

Chain OUTPUT (policy ACCEPT)
target     prot opt source               destination
root@d396d2c77f5b:/#

in case that’s helpful…

@kaos all LB published ports should get registered in the CATTLE_PREROUTING chain in the host’s iptables. For example:

ADDRTYPE match dst-type LOCAL tcp dpt:90 to:10.42.243.132:90

Could you share the output of iptables-save taken on the host your LB agent resides on?
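Something like this on the host should pull out just the relevant NAT entries:

sudo iptables-save -t nat | grep CATTLE_PREROUTING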

Also, in 0.30 we’ve added support for internal listening ports for the LB, so you get a choice of whether an LB listening port is published on the host or remains internal. Public is the default, but I just wanted to make sure you haven’t chosen the “Internal” option in the UI - I know it’s unlikely :slight_smile:

P.S. If you log in to the LB agent, you won’t see the forwarding chain there when you execute iptables -L; all routing rules are saved on the host. But you can see the port the load balancer is listening on by executing “iptables-save”. The listening ports will be listed under the PREROUTING ACCEPT/INPUT ACCEPT sections:

root@ccca600b9832:/# iptables-save

:PREROUTING ACCEPT [90:19464]
:INPUT ACCEPT [90:19464]

Port 90 in this example is the LB listening port.

@alena thank you, I can see the CATTLE_PREROUTING entries on the host, and have one for port 80 as well :smile:

-A CATTLE_PREROUTING -p tcp -m addrtype --dst-type LOCAL -m tcp --dport 80 -j DNAT --to-destination 10.42.88.43:80  

And 10.42.88.43 is the IP of the LB agent, so far so good. However, the LB agent doesn’t seem to listen on port 80, nor does there appear to be any iptables rule inside the LB agent container to forward requests to port 80.
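To double-check, I probed the DNAT target from the host (nc is just one way to do this):

# 10.42.88.43 is the DNAT target from the rule above
nc -zv 10.42.88.43 80
# connection refused, so nothing is listening on 80 in the LB agent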

Hmm… I tried restarting the load balancer to see if it yielded anything, and found these log entries in the LB agent container:

7/31/2015 1:21:19 PM INFO: ROOT -> ./etc/monit/
7/31/2015 1:21:19 PM INFO: ROOT -> ./etc/monit/conf.d/
7/31/2015 1:21:19 PM INFO: ROOT -> ./etc/monit/conf.d/haproxy
7/31/2015 1:21:19 PM [WARNING] 211/112119 (489) : config : 'option forwardfor' ignored for proxy 'web' as it requires HTTP mode.
7/31/2015 1:21:19 PM [WARNING] 211/112119 (489) : config : 'option httpclose' ignored for proxy 'web' as it requires HTTP mode.
7/31/2015 1:21:19 PM INFO: Sending haproxy applied 1-5bd75cf9cf20cb338ddab5d0a509db1be60125cd803a45b5c79701341d6b4d3c
7/31/2015 1:21:19 PM INFO: Sending agent-instance-startup applied 2-2ad286ad710445cf6a07c6e010c784c1831a666144d1251f2a3dce77012966b7
7/31/2015 1:21:19 PM Starting monit daemon with http interface at [localhost:2812]

I notice the two warnings about config options that require HTTP mode… could they be related?

@kaos it’s OK to see these messages on HAProxy startup; the warnings just state that some options are being ignored for TCP listeners.
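For context, those two options only take effect for HTTP listeners, i.e. under a config along these lines (just a sketch, not what Rancher generates):

defaults
    mode    http
    option  forwardfor   # adds the X-Forwarded-For header; HTTP mode only
    option  httpclose    # closes the connection after each request; HTTP mode only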

Now we have to figure out why the LB agent doesn’t listen on port 80. Could you share the contents of the /etc/haproxy/haproxy.cfg file from your LB agent? This is the file where the LB config is stored (listening ports, services registered in the LB, etc.).

From my LB agent:

root@d396d2c77f5b:/# cat /etc/haproxy/haproxy.cfg
global
    log 127.0.0.1 local0
    log 127.0.0.1 local1 notice
    maxconn 4096
    maxpipes 1024
    chroot /var/lib/haproxy
    user haproxy
    group haproxy
    daemon

defaults
    log     global
    mode    tcp
    option  tcplog
    option  dontlognull
    option  redispatch
    option  forwardfor
    option  httpclose
    retries 3
    contimeout 5000
    clitimeout 50000
    srvtimeout 50000
    errorfile 400 /etc/haproxy/errors/400.http
    errorfile 403 /etc/haproxy/errors/403.http
    errorfile 408 /etc/haproxy/errors/408.http
    errorfile 500 /etc/haproxy/errors/500.http
    errorfile 502 /etc/haproxy/errors/502.http
    errorfile 503 /etc/haproxy/errors/503.http
    errorfile 504 /etc/haproxy/errors/504.http

listen web 0.0.0.0:9
root@d396d2c77f5b:/#

Upgraded to v0.31.0 and tried again by recreating my Load Balancer, with the same results.

Steps to reproduce:

  • Add an external service pointing to foo.example.org
  • Add a service alias for the above service (it seems I can’t target an external service directly when creating the LB)
  • Add a load balancer mapping port 80 to port 80 on the above service alias (public, HTTP)

Expected result: an LB agent listening on port 80, forwarding HTTP requests to foo.example.org:80
Actual result: the LB agent doesn’t listen on port 80.

So, pointing a load balancer at a hostname is fairly unusual. It’s awkward because it’s not clear when to resolve the name into a list of IPs: resolving on every request would be terribly slow, and observing the TTL of the DNS record means adding DNS code to the load balancer. So HAProxy’s behavior (and I believe nginx’s, and most hardware balancers’) is to resolve the name on startup and never look it up again (until restart).

That behavior is rather undesirable if external.com ever changes. In particular, Amazon ELB regularly changes the target IPs of its balancers, so the name you get for many common SaaS services would regularly break.

So the UI specifically hides external services that point to a hostname (vs. IP addresses) from the dropdown, because that’s not supported yet. Your step 2 is basically tricking the UI into allowing it, because I don’t recursively follow alias chains when filtering them out :smile:

It appears the backend API doesn’t actually support this either (@alena), so it’s configuring based on an empty list of target IPs and, according to your output, not even creating a backend entry at all.
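For comparison, with resolvable targets I’d expect the generated config to end with a populated listener, roughly like this (server name and IP are made up):

listen web 0.0.0.0:80
    server backend1 10.42.0.2:80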

To say we support hostnames, I think we’d have to make it work the way users expect, updating periodically according to the hostname’s TTL. We’ll probably need to add something that watches the names in the config, polls them periodically, and reloads HAProxy whenever the list of IPs changes, along the lines of the sketch below.
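A minimal sketch of such a watcher, assuming getent for resolution and a stock init script for the reload (the hostname, interval, and reload command are placeholders):

#!/bin/sh
# Poll a hostname and reload haproxy whenever its resolved address set changes.
NAME="foo.example.org"   # hostname taken from the LB config (placeholder)
INTERVAL=30              # polling interval in seconds; ideally the record's TTL

last=""
while true; do
    # getent prints one resolved IPv4 address per line; sort for a stable comparison
    current=$(getent ahostsv4 "$NAME" | awk '{print $1}' | sort -u)
    if [ -n "$current" ] && [ "$current" != "$last" ]; then
        last="$current"
        # the real watcher would regenerate the haproxy backend list here first
        service haproxy reload
    fi
    sleep "$INTERVAL"
done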

Aha! :smiley:

Thank you. I did suspect it was the way I was using it, somehow, but I didn’t think of the hostname-resolution issue you explained. In our setup, the name I use will resolve to the same IP for a very long time (until we change it, for whatever reason), so I would be more than fine with a one-time lookup. But, as you say, I could just as well use the IP directly, to avoid the confusion and (hopefully) get it to work now… :smile:

Phew. It does work now.

I feel just a tad bit dumb :stuck_out_tongue: Got tricked by the “hostname support” for external services…

Thank you @vincent and @alena for your assistance with this.

Glad we figured it out… I do think we should support (changing) hostnames, and put in server-side validation to prevent it until it is supported.
