After I’ve created a load balancer, I don’t see the targets and ports etc in the UI for it (just the containers).
Also, when I click edit, I can add new targets, but I would expect to see the current targets as well (in order to edit/remove them).
I also have an underlying issue with mapping service ports, but these UI issues make me unsure of where the real problem lies (i.e. it seems like the target I added when creating the load balancer actually is used, even though I can’t see it).
Can you tell us what version of Rancher you are using? Load balancers have been going through a lot of enhancements over the last couple of releases, but I have no issues with creating a load balancer and seeing the targets when I edit.
There may be some issues related to your proxy as the NO_PROXY must be set up per the comment in order for communication between the server and agents to occur.
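Something along these lines is what that usually looks like (the server hostname and host IP here are hypothetical; adjust to your environment), set in the environment the agent is started with:
export NO_PROXY="localhost,127.0.0.1,rancher-server.example.com,192.168.1.10"
The important part is that the Rancher server address and the host’s own addresses bypass the proxy, so server/agent traffic goes direct.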
So, with NO_PROXY properly set, my targets in the load balancer show up OK, but I still have an issue with the exposed ports not being reachable.
That is, there are no exposed/mapped ports when running docker inspect on the Lb Agent: both Config.ExposedPorts and NetworkSettings.Ports are {}, and HostConfig.PortBindings is null.
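(For reference, something like the following shows those fields directly; the container name is a placeholder:)
docker inspect --format '{{json .Config.ExposedPorts}}' <lb-agent-container>
docker inspect --format '{{json .NetworkSettings.Ports}}' <lb-agent-container>
docker inspect --format '{{json .HostConfig.PortBindings}}' <lb-agent-container>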
However, in the UI for the Lb Agent, I see that it ought to expose port 80:
Not sure what else to look for at the moment… trying to talk to port 80 of the Lb Agent container doesn’t work either (connection refused), on both the docker IP (172.…) and the Rancher-assigned IP (10.42.…).
Looking at /etc/haproxy/haproxy.cfg though, the last line says: listen web 0.0.0.0:9, which to me is a bit surprising (I’m not familiar with haproxy, so I don’t know if this is a common haproxy thing, but it looks suspect to me). And curl’ing the container at port 9 gives an empty reply (so it does listen on that port).
This is as far as I’ve got at the moment… hopefully you can spot something obvious I’m doing wrong
@kaos all LB published ports should get registered in the CATTLE_PREROUTING chain in the host’s iptables. Example:
ADDRTYPE match dst-type LOCAL tcp dpt:90 to:10.42.243.132:90
Could you share the output of iptables-save taken on the host your lb agent resides on?
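If you want to trim it down, something like this run on the host should be enough (the DNAT rules for published ports live in the nat table):
sudo iptables-save -t nat | grep CATTLE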
Also, in 0.30 we’ve added support for internal listening ports for the LB, so you get a choice of whether you want an LB listening port to be published on the host or to remain internal. Public is the default option, but I just wanted to make sure you haven’t chosen the “Internal” option in the UI - I know it’s unlikely.
P.S. If you log in inside the lb agent, you won’t see the forwarding chain there when you execute iptables -L. All routing rules are saved on the host. But you can see the port the load balancer is listening on by executing “iptables-save”. The listening ports will be listed under the PREROUTING ACCEPT/INPUT ACCEPT sections:
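For illustration, the relevant part of iptables-save output looks roughly like this (address and counters here are made up):
*nat
:PREROUTING ACCEPT [0:0]
:INPUT ACCEPT [0:0]
:CATTLE_PREROUTING - [0:0]
-A CATTLE_PREROUTING -p tcp -m addrtype --dst-type LOCAL -m tcp --dport 80 -j DNAT --to-destination 10.42.0.2:80
COMMIT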
@alena thank you, I can see the CATTLE_PREROUTING entries on the host, and have one for port 80 as well
-A CATTLE_PREROUTING -p tcp -m addrtype --dst-type LOCAL -m tcp --dport 80 -j DNAT --to-destination 10.42.88.43:80
And 10.42.88.43 is the IP of the Lb Agent, so far, so good. However, the Lb Agent doesn’t seem to listen on port 80, nor does it look like there is any iptables rule inside the Lb Agent container to forward requests to port 80.
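Something like this (assuming netstat is available in the agent image; the container name is a placeholder) should show which ports haproxy is actually bound to inside it:
docker exec <lb-agent-container> netstat -tlnp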
@kaos it’s OK to see these messages on haproxy startup; the warnings just state that some options are being ignored for tcp listeners.
Now we have to figure out why the LB agent doesn’t listen on port 80. Could you share the contents of the /etc/haproxy/haproxy.cfg file from your LB agent? This is the file where the lb configs are stored (listening ports, services registered in the LB, etc.).
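For comparison, a minimal section for an LB listening on port 80 would look roughly like this (service name and backend address are hypothetical, and the exact layout Rancher generates may differ):
listen web 0.0.0.0:80
    mode http
    balance roundrobin
    server instance-1 10.42.88.50:8080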
So pointing a load balancer at a hostname is fairly unusual. It’s awkward because it’s not clear when to resolve the name into a list of IPs… Resolving on every request would be terribly slow, and observing the TTL of the DNS record means adding DNS code to the load balancer. So ha-proxy’s behavior (and, I believe, that of nginx and most hardware balancers) is to resolve the name on startup and never read it again (until restart).
That behavior is rather undesirable if external.com ever changes. In particular, Amazon ELB regularly changes the target IPs of their balancers, so the name you get for many common SaaS services would regularly break.
So the UI specifically hides external services that point to a hostname (vs IP addresses) from the dropdown because it’s not supported yet. Your step 2 is basically tricking the UI into allowing that, because I don’t recursively follow alias chains when filtering them out.
It appears the backend API doesn’t actually support this either (@alena) so it’s configuring based on an empty list of target IPs, and according to your output not even creating a backend entry at all.
To say we support hostnames I think we’d have to make it work the way users expect, updating periodically according to the hostname’s TTL. I think we’d need to add something that watches the names that are in the config, polls them periodically, and reloads ha-proxy whenever the list of IPs changes.
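Just to illustrate the idea (this is only a sketch, not something we ship; the hostname, interval, and reload command are hypothetical), such a watcher could be as simple as:
#!/bin/sh
# Poll a hostname and reload haproxy whenever the resolved IPs change.
HOST="external.example.com"
INTERVAL=30   # seconds; ideally derived from the record's TTL
last=""
while true; do
    current=$(getent hosts "$HOST" | awk '{print $1}' | sort | tr '\n' ',')
    if [ -n "$current" ] && [ "$current" != "$last" ]; then
        # regenerate the backend entries in haproxy.cfg here, then reload, e.g.:
        # haproxy -f /etc/haproxy/haproxy.cfg -sf $(pidof haproxy)
        last="$current"
    fi
    sleep "$INTERVAL"
done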
Thank you. I did suspect it was the way I was using it, somehow, but I didn’t think of the issue with resolving the hostname that you explained. In our setup, the name I use will resolve to the same IP for a very long time (until we change it, for whatever reason), so I would be more than fine with a one-time lookup to resolve it. But, as you say, I could just as well use the IP directly, to avoid the confusion (and, hopefully, get it to work now)…