How does a "workloadID_my-service-name" get added to the service's pod selector?

I recently started to notice that in some cases either Rancher (latest thing to be updated, 2.6 now) or Kubernetes adds a new pod selector to services, which breaks pod selection.
Creating a new service “release-component-service”, I add “app=release_component” as a selector, and “app=release_component” as a pod label too. It works nicely. For a few minutes, sometimes.
At some point something adds a second selector to the service, “workloadID-release-component-service”, which causes the selector to stop matching, and the service is no longer able to find the expected pods.
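For reference, here is a sketch of what the service spec looks like before and after the mutation (the `workloadID` key name is reconstructed from the symptom above, and the ports are placeholders; the exact key and value may differ by Rancher version). The important detail is that Kubernetes ANDs all selector keys together, so one extra key that no pod carries is enough to make the service match nothing:

```yaml
# Service as created (hypothetical sketch; names taken from the post above)
apiVersion: v1
kind: Service
metadata:
  name: release-component-service
spec:
  selector:
    app: release_component        # matches the pod label, so the service works
  ports:
    - port: 80                    # placeholder ports
      targetPort: 8080

# ...and after the mutation, the selector roughly becomes:
#   selector:
#     app: release_component
#     workloadID_release-component-service: "true"   # added automatically;
#     # selector keys are ANDed, and no pod carries this label,
#     # so the combined selector matches zero pods
```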
This seems to happen without manual interaction, at an undefined time interval after the latest manual action. Sometimes right away, but in other cases it’s been 6 hours.
The effect never happened before, but I did a cluster update to Rancher 2.6 recently.
Any idea on how to prevent/fix this would be very much appreciated.

Confirming that I’m experiencing the same on 2.6. Also 2.6.1-rc2

Found a workaround @Ulli_Berthold.

(You might not have to do it exactly in the following order, but this is the order of steps that I used)

  1. Clone the Deployment, but REMOVE the ports from the clone.
  2. Remove the old Deployment
  3. In the new deployment, create a port but DO NOT CREATE A SERVICE.
  4. Delete the service and create a new service with the selectors that you want. This time the selectors will match.
  5. Delete the Ingress and create a new Ingress that points to the new service.
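
Steps 4 and 5 above can be sketched as manifests like the following (the names, ports, and ingress host are placeholders based on this thread; adjust to your setup):

```yaml
# Step 4: a fresh Service with ONLY the selector you want
apiVersion: v1
kind: Service
metadata:
  name: release-component-service
spec:
  selector:
    app: release_component        # the only selector key; must equal the pod label
  ports:
    - port: 80                    # placeholder ports
      targetPort: 8080
---
# Step 5: a fresh Ingress pointing at the new Service (host is a placeholder)
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: release-component-ingress
spec:
  rules:
    - host: release.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: release-component-service
                port:
                  number: 80
```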

Hope that helps. Just a comment, I was a tad surprised that I needed to re-create the Ingress in order to make this work, since I thought the problem was the relationship between the Deployment and the Service.

Note that I did this on 2.6.1-rc3. Your mileage may vary if you use a different version.

Yes, recreating the ingress worked in the end for me too.
Just deleting the service and/or deployment sometimes worked for a few hours, but at some point the wrong workloadID selector reappeared. Since deleting the deployment, ingress, and service together, the deployment has been stable again.

But that was at some point in the middle of the night, so I forgot to report it here.