Move all traffic to second network interface

On all my machines, eth0 is the primary network interface and carries the public IP. Each machine also has ens10, the private network interface between hosts.

I’ve got the Rancher cluster running with the flannel iface set to ens10 and the firewall blocking all inbound traffic on eth0. This seems to work fine.

When creating a new cluster using RKE, my config uses Canal and also sets the flannel iface to ens10. Again, these machines block all traffic over eth0, as our external load balancer routes ports 80 and 443 to the cluster over the private network.

My issue is that I’m seeing some traffic (for example, when viewing pod logs or using “Execute Shell”) still going between cluster nodes (the RKE nodes) over eth0. Have I missed a setting?

My RKE template contains this for network:

  network:
    canal_network_provider:
      iface: ens10
    mtu: 0
    options:
      canal_flannel_backend_type: vxlan
      canal_iface: ens10
    plugin: canal
  restore:
    restore: false

Specifically, I can see that almost all traffic goes over ens10, except when viewing logs or executing a shell. That traffic still goes to port 10250 (the kubelet API port) over eth0.
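For reference, this is roughly how I confirmed which interface the kubelet traffic uses (the interface names match my setup; adjust as needed):

```shell
# Watch kubelet API traffic (port 10250) on the public interface.
# If everything used ens10, this capture should stay silent while
# streaming pod logs or opening a shell from the Rancher UI.
sudo tcpdump -ni eth0 tcp port 10250

# Same capture on the private interface for comparison.
sudo tcpdump -ni ens10 tcp port 10250
```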

Can this be made to use ens10, or do I really need to open up 10250 to the world?
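In case it matters, the interim workaround I’m considering is restricting 10250 to the private subnet rather than opening it fully. A rough sketch (10.0.0.0/24 is a placeholder for our actual private range):

```shell
# Accept kubelet API traffic only when it arrives on the private
# interface from the private subnet; drop it from anywhere else.
# 10.0.0.0/24 is a placeholder; substitute the real private subnet.
sudo iptables -A INPUT -i ens10 -s 10.0.0.0/24 -p tcp --dport 10250 -j ACCEPT
sudo iptables -A INPUT -p tcp --dport 10250 -j DROP
```

I’d rather fix the underlying routing than rely on this, though.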