Draining a node does not evict pods

The documentation states the following behaviour for draining nodes: “Marks the node as unschedulable and evicts all pods.”

Draining a node on my cluster (mode: Safe) marks the node as unschedulable, but it does not remove pods from the node or reschedule them on other nodes. When draining the node, I manually set the grace period for terminating pods to 5 seconds.

Are there any further default timeouts in the cluster that have to elapse before eviction happens, and where/how can I tweak them?
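For comparison, this is roughly what the same drain looks like from kubectl instead of the Rancher UI (the node name worker-1 is made up here, and on newer kubectl versions --delete-local-data has been renamed --delete-emptydir-data):

    # Cordon the node and evict its pods.
    # --grace-period overrides each pod's terminationGracePeriodSeconds;
    # --timeout bounds how long the whole drain may run before giving up.
    kubectl drain worker-1 --grace-period=5 --timeout=60s \
        --ignore-daemonsets --delete-local-data

As far as I know, apart from the grace period, the drain timeout, and any PodDisruptionBudgets, there is no additional default delay before eviction starts.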

Setup: 3 worker nodes, 1 etcd node (do)
Rancher version v2.2.4
Pod: bare nginx

Which pods are not being rescheduled? From your description, you have Pod: bare nginx. Is that a standalone Pod, rather than a Deployment, DaemonSet, etc.?

From the Drain page:

Permanently delete:

    Standalone Pods and their data
    Pods with "Empty Dir" volumes and their data.

I found that the only way to ensure that pods are rescheduled is to use the Aggressive mode, and I have drained nodes many times that way.

Have you tried the Aggressive mode?
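For reference, Aggressive mode appears to correspond roughly to kubectl's --force flag (this mapping is my assumption, and worker-1 is again a made-up node name):

    # Aggressive-style drain: --force allows standalone pods
    # (not managed by a controller) to be deleted as well.
    kubectl drain worker-1 --force --ignore-daemonsets --delete-local-data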

Yes, I used a standalone pod.
I found out that the pod was actually rescheduled to an available worker, so rescheduling is working. What initially made me think the pod was still living on the drained node was that the endpoint had not been updated and still pointed to the drained node's IP. I did not set up an Ingress or anything DNS-related; I just did a quick installation and played around with it a bit.

From my point of view, any endpoint pointing to a drained node should at least be marked as unavailable, or better, be updated with the IP of the moved/rescheduled pod.
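In case it helps anyone hitting the same confusion, the endpoint IPs can be inspected directly (assuming a Service named nginx, which is a made-up name here):

    # List the pod IPs the Service currently routes to; after a drain
    # and reschedule this should show the new pod's IP.
    kubectl get endpoints nginx -o wide

    # Compare against the pods' actual IPs and node placement.
    kubectl get pods -o wide

Note that a pod only appears in a Service's endpoints if its labels match the Service's selector, which is easy to miss with a hand-created standalone pod.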

    I did not set up an ingress

Yes; for a proper test, try setting up a Deployment with an Ingress, and you should have much better success.
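A minimal sketch of such a setup (all names and the host are placeholders; the extensions/v1beta1 Ingress API matches the Kubernetes versions shipped with Rancher v2.2.x and has since been replaced by networking.k8s.io/v1):

    # Deployment: the controller reschedules replicas when a node drains.
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: nginx
    spec:
      replicas: 2
      selector:
        matchLabels:
          app: nginx
      template:
        metadata:
          labels:
            app: nginx
        spec:
          containers:
          - name: nginx
            image: nginx
    ---
    # Service: its endpoints track whichever pods match the selector.
    apiVersion: v1
    kind: Service
    metadata:
      name: nginx
    spec:
      selector:
        app: nginx
      ports:
      - port: 80
        targetPort: 80
    ---
    # Ingress: routes external traffic to the Service by hostname.
    apiVersion: extensions/v1beta1
    kind: Ingress
    metadata:
      name: nginx
    spec:
      rules:
      - host: nginx.example.com   # placeholder hostname
        http:
          paths:
          - backend:
              serviceName: nginx
              servicePort: 80

With this in place, draining a node simply moves the Deployment's pods, and the Service endpoints follow the new pod IPs automatically.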