"Cluster health check failed: cluster agent is not ready" on active nodes

I apologize if this has been addressed. I couldn’t find an existing topic that matched.
My Rancher cluster is up and healthy, but when I try to add another cluster the health check fails.

I have tried both importing a cluster built with RKE and creating a bare-metal custom cluster with the docker command; in both cases my cluster nodes show as Active and Healthy. They are configured as etcd, control plane, and worker nodes. The cluster reports:

This cluster is currently Error ; areas that interact directly with it will not be available until the API is ready.
Cluster health check failed: cluster agent is not ready

On the cluster built with RKE:

kubectl get --raw='/readyz?verbose'

Shows health checks pass. (I can’t get the kubeconfig for the cluster nodes built with the docker command)

Where can I look to determine why the health checks are failing?
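
For reference, the health check in that message depends on the Rancher cluster agent, which normally runs as the cattle-cluster-agent deployment in the cattle-system namespace, so its pod status and logs are a reasonable first place to look. A minimal sketch on the RKE-built cluster, assuming the default deployment name:

kubectl -n cattle-system get pods
kubectl -n cattle-system logs deploy/cattle-cluster-agent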

I figured it out.
I was using host files for name resolution and needed my Rancher server to be resolvable via DNS.
I saw an error that cluster-register couldn't start on the new cluster:

docker logs kubelet

… skipping: failed to "StartContainer" for "cluster-register"

I found the stopped cluster-register container with this

docker container ls -a | grep register

And its logs showed

INFO: Using resolv.conf: nameserver 10.43.0.10 search cattle-system.svc.cluster.local svc.cluster.local cluster.local options ndots:5
ERROR: xxxxx.com is not accessible (Could not resolve host: rancher.xxxxxx.com)
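
In other words, the container resolves names through the cluster DNS service (the 10.43.0.10 nameserver in the log), which typically forwards non-cluster names to the node's upstream DNS servers, so entries that only exist in the node's /etc/hosts never reach it. A quick way to see that split on a node is to compare an nsswitch lookup with a pure DNS lookup; rancher.example.com here is just a placeholder for the real server name:

getent hosts rancher.example.com   # goes through /etc/hosts (nsswitch), so a host-file entry is found
nslookup rancher.example.com       # queries only the DNS servers in /etc/resolv.conf, the path the agent depends on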

I reconfigured my hosts so the Rancher server is resolvable via DNS, and that fixed the issue.
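
If anyone wants to confirm the fix from inside the cluster, a throwaway pod uses the same cluster DNS path the agent does; the busybox image and rancher.example.com are just placeholders:

kubectl run dns-check --rm -it --restart=Never --image=busybox:1.36 -- nslookup rancher.example.com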

Where did you configure this?
How did you do that?