With this configuration, if AZ-c goes down (i.e. the AZ-c network becomes unreachable), will all nodes fail over from AZ-c to AZ-a, or are the instances only re-created after AZ-c recovers?
(By "failover" I mean creating new instances in AZ-a.)
Nodes are not automatically replaced. What you’ve described is not a good configuration. For the cluster to operate, a strict majority of the etcd members must be available. As you’ve laid it out, if AZ-c goes down you have no quorum for etcd, because 2 of the 3 members are lost.
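To make the quorum arithmetic concrete, here's a minimal sketch of the majority rule etcd uses (a strict majority is floor(n/2) + 1 members):

```python
def quorum(members: int) -> int:
    """Minimum number of live members needed for an etcd cluster to operate."""
    # A strict majority: floor(n/2) + 1
    return members // 2 + 1

# 3-member cluster: quorum is 2.
# If the AZ hosting 2 of the 3 members fails, only 1 survives,
# which is below quorum, so the cluster stops accepting writes.
survivors = 3 - 2
print(quorum(3))                     # 2
print(survivors >= quorum(3))        # False
```

This is why placing 2 of 3 etcd members in the same AZ makes that AZ a single point of failure; spreading one member per AZ lets the cluster tolerate the loss of any one AZ.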
For your worker nodes, I would expect that you would run these under an ASG/ALB, so that if the health check fails the node is terminated and replaced by AWS. The replacement node then registers itself with the cluster as part of your user-data launch configuration. Is that what you have?
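As a rough illustration of that last step, a replacement worker's user-data might look something like the sketch below. This is a hypothetical fragment, not your actual configuration: the endpoint, token, and CA hash values are placeholders you would bake into your launch template or fetch from a secrets store.

```shell
#!/bin/bash
# Hypothetical user-data sketch: a freshly launched replacement worker
# joins the cluster on first boot (assumes kubeadm-based bootstrapping).
set -euo pipefail

# Assumptions: these values come from your launch template / parameter store.
CONTROL_PLANE_ENDPOINT="10.0.0.10:6443"    # your API server endpoint
BOOTSTRAP_TOKEN="abcdef.0123456789abcdef"  # a pre-created bootstrap token
CA_CERT_HASH="sha256:<discovery-hash>"     # discovery CA certificate hash

# Register this node with the cluster.
kubeadm join "${CONTROL_PLANE_ENDPOINT}" \
  --token "${BOOTSTRAP_TOKEN}" \
  --discovery-token-ca-cert-hash "${CA_CERT_HASH}"
```

With this in place, the ASG handles detection and replacement, and the user-data handles registration, so failed workers come back without manual intervention.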