After restarting one of the server nodes in an HA AWS EC2 cluster, it loses the "All" role and the nginx ingress controller


May I know if there are any known issues with an AWS HA cluster related to restarting an "All" role node?

I am running into the following issues:

I have an HA cluster (3 nodes, each with the "All" role) fronted by an NGINX load balancer. When I restarted one of the nodes to test HA, I found the following:

  1. The node lost the "All" role. When checked from kubectl, the node's role labels are not present (the UI shows it as a worker).
  2. The nginx ingress controller pod is not running on this node.
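For reference, these are the commands I used to check the roles and labels (the node-role.kubernetes.io/* label names are what I understand Rancher/RKE sets for each role; the hostname is my masked node name):

```shell
# List nodes and their reported roles
kubectl get nodes

# Show all labels on the affected node
kubectl get node ip-172-28-0-149.XXXXX --show-labels

# A node with the "All" role should normally carry all three role labels:
#   node-role.kubernetes.io/controlplane=true
#   node-role.kubernetes.io/etcd=true
#   node-role.kubernetes.io/worker=true
# On this node, none of them are present after the restart.
```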

I see the following error messages in the cattle-node-agent pod:
kubectl logs -f cattle-node-agent-hbj7p -n cattle-system
INFO: Using resolv.conf: nameserver search xxxxx.compute.internal
INFO: XXXXXX/ping is accessible
INFO: Value from XXXXXXX/v3/settings/cacerts is an x509 certificate
time="2018-12-03T08:25:55Z" level=info msg="Rancher agent version v2.1.1 is starting"
time="2018-12-03T08:25:55Z" level=info msg="Option customConfig=map[roles:[] label:map[] address: internalAddress:]"
time="2018-12-03T08:25:55Z" level=info msg="Option etcd=false"
time="2018-12-03T08:25:55Z" level=info msg="Option controlPlane=false"
time="2018-12-03T08:25:55Z" level=info msg="Option worker=false"
time="2018-12-03T08:25:55Z" level=info msg="Option requestedHostname=ip-172-28-0-149.XXXXX"
time="2018-12-03T08:25:55Z" level=info msg="Listening on /tmp/log.sock"
time="2018-12-03T08:25:55Z" level=info msg="Connecting to wss:///v3/connect with token 25kb9fwwmqprnkqbz82fklzrt6mpsjsj7bgpjhbx8bn5rdrp22j9gq"
time="2018-12-03T08:25:55Z" level=info msg="Connecting to proxy" url="wss:///v3/connect"
time="2018-12-03T08:25:55Z" level=error msg="Failed to connect to proxy" error="dial tcp :443: connect: connection refused"
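Since the last line shows a refused connection on port 443, a quick way to re-check reachability from the node is below (RANCHER_URL stands in for my masked server URL; the agent's own log above already reported the /ping endpoint as accessible):

```shell
# Print the HTTP status of the Rancher /ping endpoint over HTTPS
# (RANCHER_URL is a placeholder for the actual server URL)
curl -sk -o /dev/null -w '%{http_code}\n' "$RANCHER_URL/ping"

# Check raw TCP reachability of the server on 443
# (replace <rancher-host> with the actual hostname)
nc -zv <rancher-host> 443
```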