Rancher-nfs zombie volumes


I have been using rancher-nfs with both a manually configured NFS server as well as with AWS EFS.

In my compose file I declare a number of volumes, say 10. When I bring up the stack, Rancher creates the volumes (as directories on the share) and everything works just fine.
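For context, the declarations look roughly like this (the service and volume names here are made up for illustration, not from my actual stack):

```yaml
version: '2'
services:
  app:
    image: nginx
    volumes:
      - data1:/var/lib/data1
      - data2:/var/lib/data2
volumes:
  data1:
    driver: rancher-nfs
  data2:
    driver: rancher-nfs
```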

However, Rancher always tends to leave behind a few zombie volumes when I tear down the stack. Sometimes it leaves one volume behind, sometimes four to six. I have rarely seen Rancher actually remove ALL the volumes it created.

I see this regardless of the backend (manually created NFS or AWS EFS).

Fortunately, each volume created has its own ID, so there doesn’t seem to be a risk of overlapping volumes across multiple deployments. It’s still annoying, though, not to mention that the share fills up quickly with zombies.
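In the meantime, the leftover directories can at least be spotted from the share itself, since each rancher-nfs volume is a top-level directory on the export. A minimal sketch, using a temporary directory as a stand-in for the real NFS mount point (the mount path and volume names are assumptions):

```shell
#!/bin/sh
# Stand-in for the NFS mount point; replace with your actual mount, e.g. /exports/rancher
share=$(mktemp -d)

# Simulated leftover volume directories (hypothetical names)
mkdir "$share/myvol-1a2b" "$share/myvol-3c4d"

# List top-level volume directories so zombies can be compared against active stacks
find "$share" -mindepth 1 -maxdepth 1 -type d | sort
```

Cross-checking that listing against the volumes Rancher still reports lets you delete the orphans by hand until the bug is fixed.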

Is this a known issue? Has anyone seen this?

Hi, it is a known issue and we are addressing this bug right now. Related issues: https://github.com/rancher/rancher/issues/8643, https://github.com/rancher/rancher/issues/8498


Yep. That’s it. Thanks.