Despite having multiple kube-apiserver nodes, Rancher seems to pick a single “leader” node and reference it as the apiEndpoint. Initially, I thought this was related to the cattle-system/cattle-cluster-agent pod phoning home; however, even after that pod was rescheduled to another host during a failure, the apiEndpoint remained at the old value.
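For context, here is roughly how I’ve been inspecting the value. This is a sketch run against the Rancher management (local) cluster, where downstream clusters appear as `clusters.management.cattle.io` objects; the `.status.apiEndpoint` field path is my assumption based on what I see in the object dump, so adjust as needed:

```shell
# Assumed: kubeconfig pointing at the Rancher management ("local") cluster.
# Lists each managed cluster alongside the apiEndpoint Rancher has recorded
# for it (field path is a best guess from inspecting the CR status).
kubectl get clusters.management.cattle.io \
  -o custom-columns='NAME:.spec.displayName,API_ENDPOINT:.status.apiEndpoint'
```

In my case this consistently shows only one of the control-plane nodes, regardless of which node the cluster agent is currently scheduled on.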
Are there any docs on what this apiEndpoint is actually used for?
Any insight into how this value is managed, or how it can be changed to point at a different kube-apiserver node?
Is there a reason this value is a single node rather than a list of all kube-apiserver nodes in the cluster? As it stands, it looks like a single point of failure whose cardinality doesn’t match that of the control-plane nodes.