rke up and etcd healthcheck issue

Hello

In a cloud provider context, I sometimes have to restart VMs, and their private IPs can change when they come back up.
RKE generally handles that fine, but from time to time I run into an issue with rke up and etcd when the private IP changes on the master nodes (I have 3 master nodes).

The etcd data is of course kept, but rke up fails on the etcd healthcheck because the etcd containers try to start with the previous peer private IPs.
This is not systematic, though: sometimes the etcd containers start successfully and the etcd peers join the cluster just fine with their new private IPs.
I suspect this is linked to cluster.rkestate keeping the same certificates, but I have not yet found the exact relationship.
I'm using the initial-cluster etcd option with the same host names, as in the sketch below.
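For reference, here is a minimal sketch of what I mean (simplified cluster.yml; the host and member names are placeholders, not my real values):

```yaml
# cluster.yml (simplified, placeholder names)
services:
  etcd:
    extra_args:
      # Peer URLs pinned to host names instead of private IPs.
      # The member names must match each node's etcd --name.
      initial-cluster: "master-1=https://master-1:2380,master-2=https://master-2:2380,master-3=https://master-3:2380"
```

The idea being that DNS (or /etc/hosts) on each node maps those names to the current private IPs, so the peer URLs stay valid after a restart.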

When it fails, the cluster is almost impossible to recover.

Could someone help me work out what the correct procedure is in that case?

Thanks for your help!

Vincent