Cluster communication fails when spanning class B and C networks

Our production cluster runs fine on k8s 1.12.3-rancher1-1 and has several nodes in two different networks: (2) and (6).
When upgrading the cluster to any newer version of k8s (verified with 1.16.4-rancher1-1 and 1.17.5-rancher1-1), communication between nodes of these networks fails.

To reproduce the issue, set up the following environment. It is not necessary to perform an upgrade from 1.12.3 to a new version; a clean install of any new version seems to produce the same result:

  • 3 VMs using “Ubuntu 16.04 LTS”
    • one VM: GATEWAY ( forwarding packets between the networks, as well as providing access to the internet
    • one VM: CORE01 ( as etcd, controlplane and worker
    • one VM: FRONTEND01 ( as worker



nodes:
# frontend nodes
  - address:
    role:
      - worker
    hostname_override: frontend01
    labels:
      tier: frontend
      environment: Production
    user: deployuser
    ssh_key_path: ./frontend.key
    # note: for support of a key with a passphrase see

# core nodes
  - address:
    role:
      - controlplane
      - etcd
      - worker
    hostname_override: core01
    labels:
      tier: core
      environment: Production
    user: deployuser
    ssh_key_path: ./backend.key
    # note: for support of a key with a passphrase see

# Cluster Level Options
cluster_name: production
ignore_docker_version: false
kubernetes_version: "v1.16.4-rancher1-1"

# SSH Agent
ssh_agent_auth: false # use the RKE built-in agent

# deploy an ingress controller on all nodes
ingress:
  provider: nginx
  options:
    server-tokens: false
    ssl-redirect: false


host         rule
FRONTEND01   allow 8472/udp from
FRONTEND01   allow 10250/tcp from
FRONTEND01   allow ssh
CORE01       allow 6443/tcp from
CORE01       allow 8472/udp from
CORE01       allow ssh
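The FRONTEND01 rules above could be applied with ufw, for example. This is only a sketch: the source subnet below is a hypothetical placeholder, since the original source networks are not shown here.

```shell
# Sketch of the FRONTEND01 rules with ufw. 192.0.2.0/24 is a hypothetical
# placeholder -- substitute the subnet your other nodes live in.
ufw allow ssh
ufw allow proto udp from 192.0.2.0/24 to any port 8472   # flannel VXLAN overlay
ufw allow proto tcp from 192.0.2.0/24 to any port 10250  # kubelet API
```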
  • Deploy the cluster using rke (v1.0.8) and wait for it to be ready.
  • Launch a CentOS pod on one of the nodes, e.g. CORE01
    kubectl run -it centos1 --rm --image=centos --restart=Never --overrides='{"apiVersion":"v1","spec":{"affinity":{"nodeAffinity":{"requiredDuringSchedulingIgnoredDuringExecution":{"nodeSelectorTerms":[{"matchFields":[{"key":"metadata.name","operator":"In","values":["core01"]}]}]}}}}}' --kubeconfig kube_config_cluster.yml -- /bin/bash
  • ping your favourite external site
    for i in {1..100}; do ping -c 1; done

Notice that name resolution is very slow and often fails completely.
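One way to quantify the lookup failures is a small loop like the one below. This is only a sketch: `localhost` is a stand-in target so the snippet is self-contained; inside the pod you would use an external hostname instead.

```shell
# Count failed name lookups (sketch). "localhost" is a placeholder target;
# replace it with an external hostname when running inside the pod.
name=localhost
fail=0
for i in $(seq 1 5); do
  getent hosts "$name" >/dev/null || fail=$((fail+1))
done
echo "failures: $fail/5"
```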

  • Stop FRONTEND01 and wait for the cluster to recognize the lost node
  • ping again

Name resolution works fast and ping succeeds every time.

  • Reset all VMs and change the network configuration of GATEWAY and CORE01: put them in a network segment (but not the same one as FRONTEND01!)
  • Deploy the cluster
  • ping some external site

Name resolution works fast and ping succeeds every time.
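Since the inter-node overlay traffic is flannel VXLAN on 8472/udp (see the firewall rules above), one way to check whether overlay packets actually make it across the gateway is to capture on that port. A sketch, where the interface name is a placeholder:

```shell
# On GATEWAY: watch for flannel VXLAN traffic crossing between the segments.
# "eth0" is a placeholder -- use the interface facing the node networks.
tcpdump -ni eth0 udp port 8472
```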

component   version
OS          Ubuntu 16.04.6
docker      19.03.1 (docker-ce, docker-ce-cli)
k8s         1.12.3-rancher1-1 (ok); 1.16.4-rancher1-1 (failed); 1.17.5-rancher1-1 (failed)
rke         1.0.8
kubectl     1.16.1

We face a similar problem: some nodes of our cluster are located in a different network segment. Because of this issue we are stuck at k8s versions below “1.13.x-rancher1-1”.

Can anybody help?

If this is consistently breaking in a new k8s release, the auto-detection on kubelet start is probably different, or something in the CNI has changed. Please share the kubelet and CNI pod container logs from a working and a non-working version; that’s probably the fastest way to diagnose.
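For reference, on an RKE-provisioned node the kubelet runs as a Docker container named `kubelet`, and canal is RKE's default network provider, so collecting the logs could look roughly like this (a sketch; the label selector and the `canal-xxxxx` pod name are assumptions — list the pods first and substitute the real name):

```shell
# Kubelet logs (RKE runs the kubelet as a container named "kubelet"):
docker logs kubelet > kubelet.log 2>&1

# CNI pod logs. The label below is an assumption for RKE's canal deployment;
# replace "canal-xxxxx" with an actual pod name from the first command.
kubectl --kubeconfig kube_config_cluster.yml -n kube-system get pods -l k8s-app=canal -o wide
kubectl --kubeconfig kube_config_cluster.yml -n kube-system logs canal-xxxxx -c kube-flannel > flannel.log
```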

Hi Superseb,

thanks for the reply.

I couldn’t find any upload function here, and posting about 1.3 MB of logs directly is rather messy. Therefore you can find all logs here:

If you need more logs and information, just let me know.