Kubernetes pods not deployed to a node

I have a cluster with 3 nodes set up via Rancher. Two of them are working fine, but one (the node with external IP 10.105.1.76) is missing Kubernetes system pods such as CoreDNS.

Below is the output for that cluster:

> kubectl get pods,svc -o wide -n kube-system
NAME                                          READY   STATUS      RESTARTS   AGE     IP            NODE            NOMINATED NODE   READINESS GATES
pod/canal-8bf2l                               2/2     Running     2          19d     10.105.1.78   hdn-dev-app68   <none>           <none>
pod/canal-9kfpl                               2/2     Running     0          5m52s   10.105.1.76   hdn-dev-app66   <none>           <none>
pod/canal-vq474                               2/2     Running     6          19d     10.105.1.77   hdn-dev-app67   <none>           <none>
pod/coredns-849545576b-gcf7p                  1/1     Running     1          19d     10.42.2.11    hdn-dev-app68   <none>           <none>
pod/coredns-849545576b-r2tvt                  1/1     Running     1          15m     10.42.1.15    hdn-dev-app67   <none>           <none>
pod/coredns-autoscaler-84bf756579-96q9h       1/1     Running     1          34m     10.42.1.14    hdn-dev-app67   <none>           <none>
pod/metrics-server-697746ff48-j2s5t           1/1     Running     2          56m     10.42.2.12    hdn-dev-app68   <none>           <none>
pod/rke-coredns-addon-deploy-job-2sjlv        0/1     Completed   0          6d19h   10.105.1.77   hdn-dev-app67   <none>           <none>
pod/rke-ingress-controller-deploy-job-9q4c2   0/1     Completed   0          6d19h   10.105.1.77   hdn-dev-app67   <none>           <none>
pod/rke-metrics-addon-deploy-job-cv42h        0/1     Completed   0          6d19h   10.105.1.77   hdn-dev-app67   <none>           <none>
pod/rke-network-plugin-deploy-job-4pddn       0/1     Completed   0          6d19h   10.105.1.77   hdn-dev-app67   <none>           <none>

NAME                     TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)                  AGE   SELECTOR
service/kube-dns         ClusterIP   10.43.0.10     <none>        53/UDP,53/TCP,9153/TCP   19d   k8s-app=kube-dns
service/metrics-server   ClusterIP   10.43.220.58   <none>        443/TCP                  19d   k8s-app=metrics-server

I have tried to re-register that node in the cluster a few times, but it still has the same problem.

Any thoughts on what went wrong?
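For what it's worth, a first diagnostic step is to check whether the scheduler is avoiding that node because of a taint or a bad condition, and to see what is actually running there (node name `hdn-dev-app66` taken from the output above):

```shell
# Inspect the node's conditions and taints -- a NotReady condition or a
# taint like node.kubernetes.io/unreachable would explain why the
# scheduler places nothing there.
kubectl describe node hdn-dev-app66

# List every pod actually scheduled on that node, across all namespaces.
kubectl get pods --all-namespaces -o wide \
  --field-selector spec.nodeName=hdn-dev-app66
```

Since the `canal` pod on that node shows `2/2 Running`, the kubelet itself is clearly registering and running pods, which points away from a node-level failure.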

CoreDNS is not a daemonset (by default) and does not run on every node.

OK, I didn't know that. But in that case I've ended up with another problem, described here: an app in one pod cannot reach a database in another pod, both in the same namespace, on the node where CoreDNS is not running. I also can't find anything related to that app's hostname-resolution requests in the CoreDNS logs.
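A reasonable next step for that follow-up problem is to test DNS resolution from a pod pinned to the affected node. A sketch, following the approach in the Kubernetes DNS-debugging docs (the pod name `dns-test` is arbitrary; any image with `nslookup` works):

```shell
# Run a throwaway pod forced onto the affected node via a nodeName override.
kubectl run dns-test \
  --image=registry.k8s.io/e2e-test-images/jessie-dnsutils:1.3 \
  --restart=Never \
  --overrides='{"spec":{"nodeName":"hdn-dev-app66"}}' \
  -- sleep 3600

# Resolution should go via the kube-dns Service VIP (10.43.0.10 in your
# output), which load-balances to CoreDNS pods on the OTHER nodes.
kubectl exec dns-test -- cat /etc/resolv.conf
kubectl exec dns-test -- nslookup kubernetes.default

# If lookups time out here but work from pods on the other nodes, the
# likely culprit is the overlay network: pods on this node cannot reach
# pod IPs on the other nodes (10.42.1.x / 10.42.2.x), so check canal
# logs and any firewall rules blocking the VXLAN/flannel traffic.
```

If DNS fails only on this node, that would also explain why nothing shows up in the CoreDNS logs: the queries never arrive there.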