Deployment stays in "modified" state in Fleet

There doesn’t seem to be a Fleet category, so Rancher 2.5 seems to be the closest fit.

I deployed a few Minecraft servers to a k3s cluster with Rancher 2.5.2 on it.
It works just fine; the servers come up after the YAML files are created in the Git repo.

But in Fleet the Deployment objects stay in a “Modified” warning state. I can’t figure out what this means or why it happens.

Does anyone know?

It seems like Fleet doesn’t like deployment manifests that specify container resource limits.
The only workaround I’ve found is to remove the per-container resource limits.
Setting them at the namespace level instead might work, but I haven’t tested that.
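
For the namespace-level idea, a LimitRange could provide default requests and limits for every container in the namespace, so the per-container resources block can be dropped from the Deployments. This is only an untested sketch; the object name and the values here are placeholders:

---
apiVersion: v1
kind: LimitRange
metadata:
    namespace: mc-java
    name: mc-java-defaults       # placeholder name
spec:
    limits:
        - type: Container
          # defaults are only applied to containers that don’t set their own requests/limits
          defaultRequest:
              cpu: "1"
              memory: 4Gi
          default:
              cpu: "6"
              memory: 5Gi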

A deployment such as this will cause the “Modified” state described above:

---
apiVersion: apps/v1
kind: Deployment
metadata:
    namespace: mc-java
    name: mcb-creative
    labels:
        app: mcb-creative
spec:
    replicas: 1
    revisionHistoryLimit: 1
    strategy:
        rollingUpdate:
            maxSurge: 0
            maxUnavailable: 1
        type: RollingUpdate
    template:
        metadata:
            name: mcb-creative
            labels:
                app: mcb-creative
        spec:
            containers:
                - name: mcb-creative
                  image: itzg/minecraft-server:latest
                  imagePullPolicy: Always
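                  # note: the unquoted integer cpu values below turn out to trigger the “Modified” state (see the quoting fix later in the thread)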
                  resources:
                      requests:
                          cpu: 1
                          memory: 4Gi
                      limits:
                          cpu: 6
                          memory: 5Gi
                  ports:
                      - name: game-port
                        containerPort: 25565

The bundle summary looks like this. The resource limits show up as patches that apparently cannot be applied (?)

  summary:
    desiredReady: 1
    modified: 1
    nonReadyResources:
    - bundleState: Modified
      modifiedStatus:
      - apiVersion: apps/v1
        kind: Deployment
        name: mc-survival-1
        namespace: mc-java
        patch: '{"spec":{"template":{"spec":{"$setElementOrder/containers":[{"name":"mc-survival-1"}],"containers":[{"name":"mc-survival-1","resources":{"limits":{"cpu":6},"requests":{"cpu":1}}}]}}}}'
      - apiVersion: apps/v1
        kind: Deployment
        name: mcb-creative
        namespace: mc-java
        patch: '{"spec":{"template":{"spec":{"$setElementOrder/containers":[{"name":"mcb-creative"}],"containers":[{"name":"mcb-creative","resources":{"limits":{"cpu":6},"requests":{"cpu":1}}}]}}}}'
      - apiVersion: apps/v1
        kind: Deployment
        name: mcb-lobby
        namespace: mc-java
        patch: '{"spec":{"template":{"spec":{"$setElementOrder/containers":[{"name":"mcb-lobby"}],"containers":[{"name":"mcb-lobby","resources":{"limits":{"cpu":"6000m"},"requests":{"cpu":"1000m"}}}]}}}}'
      name: fleet-local/local
    ready: 0
  unavailable: 0
  unavailablePartitions: 0

Solved: quote the CPU values. Presumably the bare integers come back as strings from the API server, so Fleet’s diff keeps seeing a mismatch between Git and the live object:

  resources:
    requests:
      cpu: "1"
      memory: 3Gi
    limits:
      cpu: "6"
      memory: 5Gi

I have been seeing the same sort of behavior with something as straightforward as nginx-ingress. My fleet.yaml looks like this:

defaultNamespace: nginx-ingress
helm:
  chart: https://helm.nginx.com/stable/nginx-ingress-0.6.1.tgz
targetCustomizations:
- name: fleet-test-1
  clusterSelector:
    matchLabels:
      cloudsoda.io/cluster-name: fleet-test-1
- name: soda-fleet-test-2
  clusterSelector:
    matchLabels:
      cloudsoda.io/cluster-name: fleet-test-2

And while things are deployed and appear to be working, the Deployment and ConfigMap seem to be stuck in the “Modified” state. The error message is also not really clear:

Modified(2) [Bundle nginx]; configmap.v1 nginx-ingress/nginx-nginx-ingress modified {"data":null}; deployment.apps nginx-ingress/nginx-nginx-ingress modified {"spec":{"template":{"spec":{"hostNetwork":false}}}}

I would also like to note I am seeing the same sort of thing with MetalLB. Everything deploys and works, but the PodSecurityPolicy, ConfigMap and DaemonSet all stay in the “Modified” state while k8s says everything looks good.

Both the nginx-ingress and MetalLB charts are known-good external Helm charts, so it seems odd that they would both be flagged like this.
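
One possible workaround (untested on my side, and assuming the Fleet version in use supports the diff section of fleet.yaml) would be to tell Fleet to ignore the fields it keeps flagging, via diff.comparePatches. A sketch for the nginx case, added to the fleet.yaml above and using the resource names from the error message; the MetalLB resources could presumably be handled the same way:

diff:
  comparePatches:
  # ignore the hostNetwork field reported on the Deployment
  - apiVersion: apps/v1
    kind: Deployment
    namespace: nginx-ingress
    name: nginx-nginx-ingress
    jsonPointers:
    - "/spec/template/spec/hostNetwork"
  # ignore the null data field reported on the ConfigMap
  - apiVersion: v1
    kind: ConfigMap
    namespace: nginx-ingress
    name: nginx-nginx-ingress
    jsonPointers:
    - "/data"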

Same problem with Loki: it works great, and it’s the most basic deploy possible.

  summary:
    desiredReady: 1
    modified: 1
    nonReadyResources:
    - bundleState: Modified
      modifiedStatus:
      - apiVersion: apps/v1
        kind: StatefulSet
        name: loki-test2
        namespace: loki-test2
        patch: '{"spec":{"template":{"spec":{"$setElementOrder/containers":[{"name":"loki"}],"containers":[{"$setElementOrder/volumeMounts":[{"mountPath":"/etc/loki"},{"mountPath":"/data"}],"env":null,"name":"loki","volumeMounts":[{"mountPath":"/data","subPath":null}]}],"initContainers":[],"nodeSelector":{},"tolerations":[]}}}}'
      name: fleet-default/c-qsxgn
    ready: 0
  unavailable: 0
  unavailablePartitions: 0

@nex916 - Rancher 2.5.6 solved this issue.