L4 Balancer, pending, External-IP

If I create a new workload and select an L4 load balancer, it hangs at “pending” while trying to get an external IP.

Is there any configuration I’m missing to allow the load balancer to get an external IP?

thanks!


I have the same issue here. Did you ever find a solution to this problem?

Thank you

Hey, I was hitting similar brick walls with my VMware test lab, but not on my AWS lab. I managed to find info on how the L4 and L7 load balancers work below.

Check out
https://rancher.com/docs/rancher/v2.x/en/concepts/load-balancing/

Depending on the workload you are deploying, check which layer the load balancer operates at, as it may not be supported on the cloud/VM platform used to run your containers.

E.g., if you’re running Rancher in a vSphere environment, then Layer 4 load balancers are not supported. Layer 7 load balancers are, but they aren’t supported by all cloud platforms either (e.g. Azure).

Lastly, depending on the traffic being passed, you may want to confirm which load balancer layer you actually need.

Yes, I get that. The problem with Ingress (L7) in Rancher 2.0 is that it’s not possible to set a custom port; it only binds on 80 and 443. In Rancher 1.6, an Ingress launched an HAProxy instance, and the http.port annotation let us run multiple Ingress instances on different ports. Today that’s not possible, since Ingress doesn’t support custom ports.

I had the same issue, and yes, an L4 load balancer is not supported on vSphere, but Rancher sets up ingress-nginx in the cluster, so you can use the tcp-services ConfigMap to configure TCP load balancing. Refer to the ingress-nginx documentation for more details: https://kubernetes.github.io/ingress-nginx/user-guide/exposing-tcp-udp-services/
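For example, a minimal sketch of that ConfigMap (the port 9000 and the default/my-tcp-app service here are placeholders for your own service, not anything specific from this thread) maps external TCP port 9000 to port 8080 of a service:

apiVersion: v1
kind: ConfigMap
metadata:
  name: tcp-services
  namespace: ingress-nginx
data:
  # format: "<external port>": "<namespace>/<service name>:<service port>"
  "9000": "default/my-tcp-app:8080"

Note that the chosen port also has to be exposed on the ingress-nginx controller itself (via its Service or host ports), as described in the linked documentation.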

Ingress-nginx is deployed in its own namespace, but once you assign that namespace to a project you can edit the ConfigMap via the Rancher UI.

Hope this helps, but I don’t know yet whether this has any side effects…

Yes, this really does change the ports. The problem with this approach is that it only changes the port from 80 to another port. I need a solution like Rancher 1.6 had: create an Ingress, have it launch an HAProxy instance, and use the http.port annotation to set a different port for each Ingress setup.

I’m using AWS EC2. Does that mean that when setting up an L4 balancer I need to manually set up an ELB/NLB on AWS? I thought that Rancher would launch an internal nginx for that… how does it work exactly?
Thanks,

Same problem here. I have the same need as @Kleber_Rocha, and I’m running on EC2 like @NachoColl.

Have you found anything? How did you solve the issue?

Bump^

Facing the same problem

Any news on this issue? I’m currently facing it when deploying Bitnami Helm charts; they create an L4 load balancer that gets stuck on pending.

There’s no news to be had here… generic k8s clusters have no controller configured to handle services of type=LoadBalancer. If you create one and nothing is configured, it just sits in the pending state forever.
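This is all it takes to hit that behavior: a plain Service of type LoadBalancer, like the minimal sketch below (the my-app name and selector are placeholders):

apiVersion: v1
kind: Service
metadata:
  name: my-app                # placeholder name
spec:
  type: LoadBalancer          # requires a controller to assign an external IP
  selector:
    app: my-app               # placeholder selector
  ports:
  - port: 80
    targetPort: 8080

Without a controller to act on it, kubectl get svc my-app will show <pending> under EXTERNAL-IP indefinitely.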

If you want them to work you need to:

  • Set up the appropriate Cloud Provider for the hosting company you’re using, if one exists (AWS, etc)

  • or a 3rd party implementation (for on-prem installations, mostly) like MetalLB or kube-vip (see the MetalLB sketch after this list).

  • or our k3s distribution includes a (very) basic implementation which just uses ports on the node so that they do something useful by default.
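For instance, with MetalLB in layer 2 mode, a minimal configuration in the legacy ConfigMap format (the format current around the time of this thread; the address range is an assumption you’d replace with free IPs on your own network) looks like:

apiVersion: v1
kind: ConfigMap
metadata:
  name: config
  namespace: metallb-system
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 192.168.1.240-192.168.1.250    # assumed free range on your LAN

With MetalLB installed and a pool like this defined, services of type=LoadBalancer get an external IP from the range instead of sitting in pending.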

Sorry to hijack the thread, but I am having a weird kube-vip issue. During a “heavier” deployment (replacing local-storage with Longhorn, for example), the API on the cluster nodes (masters or agents) starts becoming unavailable, which triggers kube-vip to start leader election (I tried both etcd and the datastore). This continues until the deployment finishes and causes random disconnects from the cluster…

time="2021-06-23T08:28:03Z" level=info msg="Starting Kube-vip Manager with the ARP engine"
time="2021-06-23T08:28:03Z" level=info msg="Namespace [kube-system], Hybrid mode [true]"
time="2021-06-23T08:28:03Z" level=info msg="Beginning cluster membership, namespace [kube-system], lock name [plndr-svcs-lock], id [cp-01]"
I0623 08:28:03.422982       1 leaderelection.go:243] attempting to acquire leader lease  kube-system/plndr-svcs-lock...
time="2021-06-23T08:28:03Z" level=info msg="Beginning cluster membership, namespace [kube-system], lock name [plndr-cp-lock], id [cp-01]"
I0623 08:28:03.428041       1 leaderelection.go:243] attempting to acquire leader lease  kube-system/plndr-cp-lock...
time="2021-06-23T08:28:13Z" level=info msg="new leader elected: cp-03"
time="2021-06-23T08:28:13Z" level=info msg="Node [cp-02] is assuming leadership of the cluster"
I0623 08:30:49.238720       1 leaderelection.go:253] successfully acquired lease kube-system/plndr-cp-lock
time="2021-06-23T08:30:49Z" level=info msg="Node [cp-01] is assuming leadership of the cluster"

Is there something I am missing?