IIRC this is basic Kubernetes behavior. The local kubelet on each worker node manages that node's pods, and when the node dies, Rancher/K8s will keep reporting the last known state from that kubelet, which in your case lists 16 pods. You can rest assured those 16 pods are not actually running since the host is down; Rancher is just surfacing stale information from the Kubernetes API.
However, those pods won't be replaced because of failed liveness/readiness probes — probes are executed by the kubelet on the node itself, so when the node is gone nothing is running them. What actually happens (IIRC) is that the node controller marks the node NotReady once the kubelet stops heartbeating, and after the pod eviction timeout (about 5 minutes by default) the pods on it are marked for deletion. At that point the owning controller (the ReplicaSet under your Deployment, or similar) schedules replacements on the surviving nodes, and the endpoints controller removes the dead pods' internal IPs from any Services backing them. Once the missing node comes back online, its kubelet reconciles with the API server, the stale pods get cleaned up, and those counters should update, I think.
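You can verify the failover from the CLI. A rough sketch, assuming a Service named `my-service` and a node named `worker-2` (both hypothetical — substitute your own names); this obviously needs a live cluster to run:

```shell
# Confirm the node is seen as down
kubectl get nodes

# List pods still "assigned" to the dead node — expect stale
# entries in Terminating/Unknown state after the eviction timeout
kubectl get pods -o wide --all-namespaces \
  --field-selector spec.nodeName=worker-2

# Check that the Service endpoints no longer include the
# dead pods' IPs (only healthy replicas should be listed)
kubectl get endpoints my-service -o wide
```

If the ReplicaSet did its job, the pod list for the Deployment should show the full replica count running on other nodes even while the stale entries linger.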
Reading your first post again, though: did Kubernetes actually not schedule replacements for those pods on the surviving nodes? These aren't one-off pods you created outside of a Deployment/ReplicaSet, right? Bare pods have no controller watching them, so nothing will recreate them when their node dies.
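A quick way to check whether a pod is controller-managed is to look at its `ownerReferences`. A sketch, assuming a pod named `my-pod` (hypothetical name); again this needs a live cluster:

```shell
# Prints the kind of the pod's owner, e.g. "ReplicaSet" for a
# Deployment-managed pod; empty output means it's a bare pod
# that nothing will reschedule
kubectl get pod my-pod \
  -o jsonpath='{.metadata.ownerReferences[0].kind}'
```

If that comes back empty for the missing pods, that would explain why no replacements showed up.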