EC2 RKE cluster provisioning on a private address

Hi there, I’m trying to deploy an RKE cluster through the (well designed) EC2 node templating system.

My Rancher server sits in the same VPC/subnet as the worker/etcd nodes and can communicate without restriction on the private subnet (172.31.0.0/16). The public subnet only allows 80 and 443. The problem is that once the Rancher server provisions some nodes, it correctly gets both the public and the private address, as visible in the server log…

2021/05/05 10:00:18 [INFO] [node-controller-rancher-machine] (test14) created instance ID i-0714e7xxxxxf9ya8, IP address xx.xx.xx.xx, Private IP address 172.31.8.147

…but it then tries to establish an SSH tunnel and a Docker connection (TCP 2376) over the public address, which is firewalled.

Is there a way, in the EC2 node template, to explicitly specify that provisioning should happen over the private/internal address? I tried many key/value settings in the EC2 node template (Engine Options, Engine Labels, Engine Environment) without success, and couldn’t even find explicit information about this in the documentation :confused:
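For context, here is roughly what I was hoping to express through the node template, written as the equivalent standalone docker-machine command (as I understand it, Rancher’s EC2 node driver wraps the docker-machine `amazonec2` driver, which does have private-address flags). The VPC/subnet IDs below are placeholders, and whether or how these flags map onto the node template fields is exactly my question:

```shell
# Hypothetical sketch, not something Rancher documents for the node template:
#   --amazonec2-use-private-address  : use the private IP for SSH / Docker (tcp/2376)
#   --amazonec2-private-address-only : stricter variant, allocate no public IP at all
docker-machine create --driver amazonec2 \
  --amazonec2-vpc-id vpc-xxxxxxxx \
  --amazonec2-subnet-id subnet-xxxxxxxx \
  --amazonec2-use-private-address \
  test14
```

If the node template simply passed these driver flags through, that would solve my case, but I haven’t found a field in the UI that clearly corresponds to them.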

Manu