Service ports automatically reset to 42: bug?

I created two workloads under the System project: a Kibana and an Elasticsearch, each with a single container. On the "Service Discovery" tab I configured the ports (changed from the default 42 to 5601 for Kibana). After a day it stopped working the way it had before, so I checked: the port had reverted. I set it again, and by the next morning it had reset once more. I have tried editing the YAML directly and using the UI, but something keeps reverting it.
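For reference, this is roughly what I set each time, both through the UI and with kubectl -n elastic-kibana edit service kibana (the name web and port 5601 come from my Kibana setup):

spec:
  ports:
  - name: web          # reverts to "default" overnight
    port: 5601         # reverts to 42
    protocol: TCP
    targetPort: 5601   # reverts to 42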

My current Kibana Service YAML is:

apiVersion: v1
kind: Service
metadata:
  annotations:
    field.cattle.io/ipAddresses: "null"
    field.cattle.io/targetDnsRecordIds: "null"
    field.cattle.io/targetWorkloadIds: '["deployment:elastic-kibana:kibana"]'
    kubectl.kubernetes.io/last-applied-configuration: '{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{"field.cattle.io/targetWorkloadIds":"[\"deployment:elastic-kibana:kibana\"]","workload.cattle.io/targetWorkloadIdNoop":"true","workload.cattle.io/workloadPortBased":"true"},"creationTimestamp":"2019-12-16T22:54:56Z","labels":{"cattle.io/creator":"norman"},"name":"kibana","namespace":"elastic-kibana","ownerReferences":[{"apiVersion":"apps/v1beta2","controller":true,"kind":"deployment","name":"kibana","uid":"0be9c00c-c3f1-4f76-92f8-60f5eeeaabbe"}],"resourceVersion":"74517960","selfLink":"/api/v1/namespaces/elastic-kibana/services/kibana","uid":"496ff727-72b2-43b1-95f6-1bdd27ccfc5c"},"spec":{"clusterIP":"None","ports":[{"name":"web","port":5601,"protocol":"TCP","targetPort":5601}],"selector":{"workload.user.cattle.io/workloadselector":"deployment-elastic-kibana-kibana"},"sessionAffinity":"None","type":"ClusterIP"}}'
    workload.cattle.io/targetWorkloadIdNoop: "true"
    workload.cattle.io/workloadPortBased: "true"
  creationTimestamp: "2019-12-16T22:54:56Z"
  labels:
    cattle.io/creator: norman
  name: kibana
  namespace: elastic-kibana
  ownerReferences:
  - apiVersion: apps/v1beta2
    controller: true
    kind: deployment
    name: kibana
    uid: 0be9c00c-c3f1-4f76-92f8-60f5eeeaabbe
  resourceVersion: "76012777"
  selfLink: /api/v1/namespaces/elastic-kibana/services/kibana
  uid: 496ff727-72b2-43b1-95f6-1bdd27ccfc5c
spec:
  clusterIP: None
  ports:
  - name: default
    port: 42
    protocol: TCP
    targetPort: 42
  selector:
    workload.user.cattle.io/workloadselector: deployment-elastic-kibana-kibana
  sessionAffinity: None
  type: ClusterIP
status:
  loadBalancer: {}

The kubectl.kubernetes.io/last-applied-configuration annotation definitely still contains my last applied config!
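Pretty-printed, the spec.ports section inside that annotation is exactly what I applied:

ports:
- name: web
  port: 5601
  protocol: TCP
  targetPort: 5601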

My questions:

  • Why does the reset happen?
  • How can I make it stop? (my current guess is sketched below)
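My current guess, and it is only a guess: the service has an ownerReference to the kibana deployment and carries workload.cattle.io/workloadPortBased: "true", so Rancher seems to regenerate the service from the workload, and with no port declared on the workload it falls back to the placeholder 42. If that is right, declaring the port on the workload's container instead of on the service might make it stick. A minimal sketch (assuming the container in my deployment is named kibana):

# in the kibana deployment, under spec.template.spec
containers:
- name: kibana
  ports:
  - name: web
    containerPort: 5601
    protocol: TCP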

I just ran into the same problem: I changed the port and targetPort values on multiple services, and after a while they all reverted to 42.

Does anyone know anything about this behaviour?