Deployment keeps creating NodePort pods

This used to happen whenever I updated a deployment, so I’ve since changed my workflow to use the scaling and upgrade policy (the Recreate strategy) that forces the old pods to actually finish up and shut down before the new ones are created. But whenever I try to start this deployment now (which worked up until today; I don’t know what changed), it suddenly generates a large number of “NodePort” pods.
[screenshot of the pod list omitted]

After taking this image, I rushed to scale the deployment back down; it had created 91 pods in total. In case it helps, this is the YAML currently set up for the deployment:

apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    deployment.kubernetes.io/revision: "23"
    field.cattle.io/publicEndpoints: "null"
  creationTimestamp: "2022-06-22T21:17:15Z"
  generation: 141
  labels:
    workload.user.cattle.io/workloadselector: apps.deployment-pi-hole-pihole
  managedFields:
  - apiVersion: apps/v1
    fieldsType: FieldsV1
    fieldsV1:
      f:metadata:
        f:labels:
          .: {}
          f:workload.user.cattle.io/workloadselector: {}
      f:spec:
        f:progressDeadlineSeconds: {}
        f:revisionHistoryLimit: {}
        f:selector: {}
        f:strategy:
          f:type: {}
        f:template:
          f:metadata:
            f:labels:
              .: {}
              f:workload.user.cattle.io/workloadselector: {}
          f:spec:
            f:affinity: {}
            f:containers:
              k:{"name":"pihole"}:
                .: {}
                f:env:
                  .: {}
                  k:{"name":"TZ"}:
                    .: {}
                    f:name: {}
                    f:value: {}
                  k:{"name":"WEB_PORT"}:
                    .: {}
                    f:name: {}
                    f:value: {}
                  k:{"name":"WEBPASSWORD"}:
                    .: {}
                    f:name: {}
                    f:value: {}
                f:imagePullPolicy: {}
                f:name: {}
                f:ports:
                  .: {}
                  k:{"containerPort":53,"protocol":"TCP"}:
                    .: {}
                    f:containerPort: {}
                    f:hostPort: {}
                    f:name: {}
                    f:protocol: {}
                  k:{"containerPort":53,"protocol":"UDP"}:
                    .: {}
                    f:containerPort: {}
                    f:hostPort: {}
                    f:name: {}
                    f:protocol: {}
                  k:{"containerPort":67,"protocol":"UDP"}:
                    .: {}
                    f:containerPort: {}
                    f:hostPort: {}
                    f:name: {}
                    f:protocol: {}
                  k:{"containerPort":36536,"protocol":"TCP"}:
                    .: {}
                    f:containerPort: {}
                    f:hostPort: {}
                    f:name: {}
                    f:protocol: {}
                f:resources: {}
                f:securityContext:
                  .: {}
                  f:allowPrivilegeEscalation: {}
                  f:capabilities:
                    .: {}
                    f:add: {}
                  f:privileged: {}
                  f:readOnlyRootFilesystem: {}
                f:terminationMessagePath: {}
                f:terminationMessagePolicy: {}
                f:volumeMounts:
                  .: {}
                  k:{"mountPath":"/etc/pihole"}:
                    .: {}
                    f:mountPath: {}
                    f:name: {}
            f:dnsConfig: {}
            f:dnsPolicy: {}
            f:hostNetwork: {}
            f:hostname: {}
            f:restartPolicy: {}
            f:schedulerName: {}
            f:securityContext: {}
            f:terminationGracePeriodSeconds: {}
            f:volumes:
              .: {}
              k:{"name":"pihole"}:
                .: {}
                f:name: {}
                f:persistentVolumeClaim:
                  .: {}
                  f:claimName: {}
    manager: agent
    operation: Update
    time: "2022-08-18T23:24:39Z"
  - apiVersion: apps/v1
    fieldsType: FieldsV1
    fieldsV1:
      f:metadata:
        f:annotations:
          f:field.cattle.io/publicEndpoints: {}
      f:spec:
        f:replicas: {}
        f:template:
          f:spec:
            f:affinity:
              f:nodeAffinity:
                .: {}
                f:requiredDuringSchedulingIgnoredDuringExecution: {}
              f:podAffinity: {}
              f:podAntiAffinity: {}
            f:containers:
              k:{"name":"pihole"}:
                f:image: {}
    manager: rancher
    operation: Update
    time: "2022-09-04T05:59:45Z"
  - apiVersion: apps/v1
    fieldsType: FieldsV1
    fieldsV1:
      f:metadata:
        f:annotations:
          .: {}
          f:deployment.kubernetes.io/revision: {}
      f:status:
        f:conditions:
          .: {}
          k:{"type":"Available"}:
            .: {}
            f:lastTransitionTime: {}
            f:lastUpdateTime: {}
            f:message: {}
            f:reason: {}
            f:status: {}
            f:type: {}
          k:{"type":"Progressing"}:
            .: {}
            f:lastTransitionTime: {}
            f:lastUpdateTime: {}
            f:message: {}
            f:reason: {}
            f:status: {}
            f:type: {}
        f:observedGeneration: {}
    manager: k3s
    operation: Update
    subresource: status
    time: "2022-09-04T06:13:44Z"
  name: pihole
  namespace: pi-hole
  resourceVersion: "42161512"
  uid: d0c42082-38d1-44ff-afe9-4411a8d5859b
spec:
  progressDeadlineSeconds: 600
  replicas: 0
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      workload.user.cattle.io/workloadselector: apps.deployment-pi-hole-pihole
  strategy:
    type: Recreate
  template:
    metadata:
      creationTimestamp: null
      labels:
        workload.user.cattle.io/workloadselector: apps.deployment-pi-hole-pihole
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: pihole
                operator: Exists
        podAffinity: {}
        podAntiAffinity: {}
      containers:
      - env:
        - name: TZ
          value: '''America/Chicago'''
        - name: WEBPASSWORD
          value: ###censored###
        - name: WEB_PORT
          value: "333"
        image: pihole/pihole:2022.09.1
        imagePullPolicy: IfNotPresent
        name: pihole
        ports:
        - containerPort: 53
          hostPort: 53
          name: 53tcp
          protocol: TCP
        - containerPort: 53
          hostPort: 53
          name: 53udp
          protocol: UDP
        - containerPort: 67
          hostPort: 67
          name: 67udp
          protocol: UDP
        - containerPort: 333
          hostPort: 333
          name: 80tcp
          protocol: TCP
        resources: {}
        securityContext:
          allowPrivilegeEscalation: true
          capabilities:
            add:
            - NET_ADMIN
          privileged: false
          readOnlyRootFilesystem: false
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        volumeMounts:
        - mountPath: /etc/pihole
          name: pihole
      dnsConfig: {}
      dnsPolicy: ClusterFirstWithHostNet
      hostNetwork: true
      hostname: pihole
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
      volumes:
      - name: pihole
        persistentVolumeClaim:
          claimName: pihole
status:
  conditions:
  - lastTransitionTime: "2022-09-04T05:49:33Z"
    lastUpdateTime: "2022-09-04T05:59:45Z"
    message: ReplicaSet "pihole-76c8c8b4b9" has successfully progressed.
    reason: NewReplicaSetAvailable
    status: "True"
    type: Progressing
  - lastTransitionTime: "2022-09-04T06:13:44Z"
    lastUpdateTime: "2022-09-04T06:13:44Z"
    message: Deployment has minimum availability.
    reason: MinimumReplicasAvailable
    status: "True"
    type: Available
  observedGeneration: 141
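
Side question while I’m at it: to see which controller actually owned those 91 pods (and whether they all traced back to this Deployment’s ReplicaSets), I was planning to run something like the following, with plain kubectl against the pi-hole namespace from the YAML above:

# list the ReplicaSets that belong to this Deployment
kubectl get rs -n pi-hole

# show which controller owns each pod
kubectl get pods -n pi-hole -o custom-columns=NAME:.metadata.name,OWNER-KIND:.metadata.ownerReferences[0].kind,OWNER:.metadata.ownerReferences[0].name

If that’s not the right way to trace where the extra pods came from, I’m happy to hear alternatives.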

So I restarted the nodes, and that behavior has stopped. However, there is now a new problem: when it tries to spin up a pod, the scheduler reports:

0/4 nodes are available: 2 node(s) didn't match Pod's node affinity/selector, 4 node(s) didn't have free ports for the requested pod ports. preemption: 0/4 nodes are available: 2 No preemption victims found for incoming pod, 2 Preemption is not helpful for scheduling.
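
For the affinity half of that message, I’m assuming the two non-matching nodes simply don’t carry the pihole label that the nodeAffinity rule above requires (key pihole, operator Exists), and that this is enough to confirm it:

# -L adds a column for the pihole label; nodes without it show an empty value
kubectl get nodes -L pihole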

I have carefully looked through my services, and the only one that uses the ports this deployment needs is the service I set up for this very deployment. Is there anything I can do to troubleshoot further why the scheduler thinks these ports are in use, or, if they really are in use, to figure out what is using them so I can decide whether I still need that service?
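
For example, would it be a reasonable approach to list every pod in the cluster that declares a hostPort (this deployment uses hostNetwork with hostPorts 53, 67, and 333), and then look at the listeners directly on the node? Something along these lines, assuming jq and ss are available on my side:

# every pod that requests a hostPort, with the port number (this is what the scheduler counts)
kubectl get pods -A -o json | jq -r '.items[] | . as $p | .spec.containers[].ports[]? | select(.hostPort) | "\($p.metadata.namespace)/\($p.metadata.name) hostPort=\(.hostPort)"'

# on the node itself: anything (Kubernetes or not) already listening on those ports
sudo ss -tulpn | grep -E ':(53|67|333)\b'

If there’s a better way to see what the scheduler is counting against those ports, I’d appreciate a pointer.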