Rancher/hyperkube log grows indefinitely

Hi all
I have a weird problem. On just one of the rancher/hyperkube containers running on a worker node, I found a f69b3d2afedf5ed199318f5ec0473fb0a8400a1723788e8bdd8b4785a711b28b-json.log that grows indefinitely.
Inside this log I found a line like this
docker_sandbox.go:394] failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod "unifi-59cffd8756-5ffnl_unifi": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container "b5e2fdd7657426c627bee008dc75e2d9335b7c406c0c036eddc3e2c075984fe7"\n","stream":"stderr","time":"2021-08-17T13:19:07.343119488Z"}
repeated every second.
We do have a container called unifi_something on another node, but its ID is completely different.
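For reference, this is roughly how I pulled the stale container ID out of the log so I could check it against the containers Docker actually knows about on that node. The log line below is trimmed, and the docker step at the end is only a sketch of what I would run on the node itself:

```shell
# One of the repeating lines from the json.log (trimmed; the ID is the
# "terminated container" kubelet keeps complaining about).
line='CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container "b5e2fdd7657426c627bee008dc75e2d9335b7c406c0c036eddc3e2c075984fe7"'

# Extract the 64-character hex container ID from the message.
stale_id=$(printf '%s\n' "$line" | grep -oE '[0-9a-f]{64}')
echo "$stale_id"

# On the node I would then check whether Docker still has this container
# (needs a real Docker daemon, so it is commented out here):
# docker inspect "$stale_id" >/dev/null 2>&1 || echo "container no longer exists in Docker"
```

In my case the ID from the log does not show up in `docker ps -a` on that node at all, which is why I suspect kubelet is holding on to stale state.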
I think this happened after an update.
How can I solve this problem?
Is there something that can rescan the active pods and remove old stuck containers?