Setting Max Unavailable to 1 still keeps new failed containers

I have the Max Unavailable setting set to 1, yet many Failed pods still persist.

Max Unavailable controls how many pods are allowed to be unavailable during a rolling update, so that a minimum number of pods stays available while the update runs. It has nothing to do with garbage collection of old/failed pods.

Is there a way to tell it to clean up the Failed pods?

https://kubernetes.io/docs/concepts/cluster-administration/kubelet-garbage-collection/
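For a one-off cleanup, something like this should work as a sketch (it assumes kubectl is already configured against the affected cluster, and that the pods you want gone are in phase Failed):

```shell
# Delete every pod in phase Failed, across all namespaces.
# Drop --all-namespaces to restrict the cleanup to the current namespace.
kubectl delete pods --field-selector=status.phase=Failed --all-namespaces
```

This only removes pods that are already Failed; it doesn't change how quickly the kubelet collects them on its own, which is what the garbage-collection settings in the link above cover.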


Ah I see, maximum-dead-containers defaults to -1.

I'm guessing the thought was that there might be something important in the dead containers worth keeping?
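Right, keeping a dead container preserves its logs and filesystem for debugging, which is why the global cap defaults to -1 (no limit). If you do want the kubelet to collect them more aggressively, its container garbage-collection flags can be tuned; a sketch (the values below are illustrative examples, not defaults, and these flags are deprecated in favor of eviction-based settings in newer releases):

```shell
# Illustrative kubelet container-GC flags (example values, not defaults):
#   --maximum-dead-containers: global cap on dead containers (-1 means no limit)
#   --maximum-dead-containers-per-container: old instances kept per container,
#     useful if you still want recent logs for debugging
#   --minimum-container-ttl-duration: minimum age before a finished container
#     becomes eligible for garbage collection
kubelet \
  --maximum-dead-containers=100 \
  --maximum-dead-containers-per-container=1 \
  --minimum-container-ttl-duration=1m
```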