Ingress 503 on workload upgrade

I’m running into a new problem: when I upgrade a workload, the ingress L7 load balancer no longer works. It keeps returning 503s.

This also happens if I go in and delete a workload pod. It seems that the ingress stops working whenever the pod ends up on a different node.

To fix it I have to go in, delete the load balancer, and add it again with the exact same settings. Then it works again.

I am on Rancher 2.0.6
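
When it’s broken, one way to check whether the ingress backend service still has endpoints is something like this (a rough sketch, assuming the default namespace and the generated service name from the ingress YAML below):

kubectl -n default get ingress
kubectl -n default describe svc ingress-4b77e098a0471af991bef9cbc4c824ea
kubectl -n default get endpoints ingress-4b77e098a0471af991bef9cbc4c824ea

If the endpoints list comes back empty, the nginx ingress controller answers with 503.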

Note: looking at the ingress YAML, I noticed:

spec:
  rules:
  - host: dashboard.mywebsite.com
    http:
      paths:
      - backend:
          serviceName: ingress-4b77e098a0471af991bef9cbc4c824ea
          servicePort: 80

shouldn’t serviceName be the name of the service (in this case dashboard)?

Yes, it should be the service name; that definitely seems wrong.

I went into all of my ingresses and manually changed serviceName to match the workload names. It seems to be working now. Hopefully it stays working.
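
For reference, this is roughly what the rule looks like after the edit (assuming the service/workload is named dashboard):

spec:
  rules:
  - host: dashboard.mywebsite.com
    http:
      paths:
      - backend:
          serviceName: dashboard
          servicePort: 80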

I am still left wondering why they weren’t created with the correct names to begin with. I created the ingresses as normal through the Rancher UI.

I’m all into doing things with YAML and kubectl, but from what I recall, this happens when you set the ingress “Target Backend” to a Workload instead of a Service.

This would make sense. I did target “workload” instead of “service” because the apps only show up under workload. When I was on Rancher 2.0.1 they would show up under “service”, but now they only show up under “workload”.

It should show up under “service” if you create a service for your workload (recommended, in my opinion).
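
A minimal sketch of such a service, assuming your workload’s pods carry the label app: dashboard, listen on port 80, and live in the default namespace (adjust the name, selector and namespace to your setup):

apiVersion: v1
kind: Service
metadata:
  name: dashboard
  namespace: default
spec:
  selector:
    app: dashboard
  ports:
  - port: 80
    targetPort: 80

With that in place, the workload shows up as a normal service and the ingress can point at a stable serviceName instead of a generated ingress- one.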

I think I discovered my problem.

Before, I created my workloads manually through Rancher, and that way a service was automatically created for each workload. But then something happened to my Rancher deployment and I had to start from scratch. This time, instead of deploying workloads manually, I used the YAML files from my earlier deployment. I guess, however, that when a workload is deployed via YAML files a service is not created automatically, and I hadn’t saved the service YAML files.
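
In case it saves someone some typing: if the deployment YAML still exists, the missing service can be generated from it instead of written by hand, for example (assuming a Deployment named dashboard on port 80 in the default namespace):

kubectl -n default expose deployment dashboard --name=dashboard --port=80 --target-port=80
kubectl -n default get svc dashboard -o yaml > dashboard-svc.yaml

The second command saves the generated service manifest so it doesn’t get lost again.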

Well hopefully this thread helps someone else if they make the same mistake!

Thanks for your help @etlweather

We’ve been experiencing the same thing. The person who set everything up is away on leave, so I can’t ask them how they created everything, but what I did see was that all the service names were ingress-.

The field.cattle.io/publicEndpoints annotations now all have the service name; however, the kubectl.kubernetes.io/last-applied-configuration annotations still have references to ingress-.

Should these be manually updated in the YAML as well? Is this a concern?

Are you going to be deploying updates to the apps before he gets back? I only had this problem after upgrading workloads, or if the container had to restart.

It’s been triggered after hours, but maybe an auto-heal is also triggering it?

I would make the change then. Change serviceName in the ingress YAML to match the workload name. After you do that, you should see that the service name on the Service Discovery page has also changed to the workload name.
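
If you prefer kubectl over the UI, something like this works for a single-rule ingress (a sketch, assuming an ingress named dashboard in the default namespace pointing at a service/workload named dashboard):

kubectl -n default edit ingress dashboard
# or patch just the backend reference:
kubectl -n default patch ingress dashboard --type=json \
  -p='[{"op":"replace","path":"/spec/rules/0/http/paths/0/backend/serviceName","value":"dashboard"}]'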

Yeah I had already made the change and everything updated automatically except for the section

kubectl.kubernetes.io/last-applied-configuration

That’s the part confusing me. How is that section used?
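
As far as I understand it, that annotation is only written and read by kubectl apply: it stores the manifest you last applied so the next apply can compute its diff, and it gets overwritten on the next apply. You can inspect it like this (assuming an ingress named dashboard in the default namespace):

kubectl -n default get ingress dashboard \
  -o jsonpath='{.metadata.annotations.kubectl\.kubernetes\.io/last-applied-configuration}'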

Is this related to the known ingress issue in the new version?