Hello, I'm wondering if anyone has experienced this scenario before. I changed a cluster config, updating a private repo, and when I saved it I also added 4 new worker nodes to the cluster.
The masters completed without error, and the workers appeared to join successfully, but nothing started on them. Every pod on the worker nodes stopped, and I received this error for each one: “unable to ensure pod container exists: failed to create container for…”
I have never experienced this kind of issue, and I'm worried there may be a bug that shows up when you make a cluster config change while also adding worker nodes. Has anyone seen this?