K3s - single-node cluster - Network issue - inter-pod IP packets fail across all pods

I created a single-node k3s cluster, but inter-pod IP packets fail to be delivered between all pods. One thing I noticed is that the ARP entries are present inside the pods, so the issue appears to be specific to IP packets.

Cluster creation command:

curl -sfL https://get.k3s.io | K3S_TOKEN=$K3S_TOKEN sh -s - server --cluster-init

# kubectl exec -ti dns-test -- sh
/ # arp -a
? (10.42.0.1) at 0a:43:b2:fe:8d:af [ether]  on eth0
? (10.42.0.3) at 96:e9:10:9b:a6:8b [ether]  on eth0
root@einps001:/arch-sol/img-capt-deployment/k8s/setup# k get pods -A -o wide
NAMESPACE     NAME                                      READY   STATUS      RESTARTS   AGE    IP          NODE                  NOMINATED NODE   READINESS GATES
default       dns-test                                  1/1     Running     0          101m   10.42.0.9   einps-arch-node-001   <none>           <none>
kube-system   coredns-6799fbcd5-5r45t                   1/1     Running     0          102m   10.42.0.3   einps-arch-node-001   <none>           <none>
kube-system   helm-install-traefik-4k8dp                0/1     Completed   1          102m   10.42.0.2   einps-arch-node-001   <none>           <none>
kube-system   helm-install-traefik-crd-nmltf            0/1     Completed   0          102m   10.42.0.5   einps-arch-node-001   <none>           <none>
kube-system   local-path-provisioner-84db5d44d9-pkv97   1/1     Running     0          102m   10.42.0.6   einps-arch-node-001   <none>           <none>
kube-system   metrics-server-67c658944b-pvxvs           1/1     Running     0          102m   10.42.0.4   einps-arch-node-001   <none>           <none>
kube-system   svclb-traefik-4e9aaac7-pxmqg              2/2     Running     0          102m   10.42.0.7   einps-arch-node-001   <none>           <none>
kube-system   traefik-f4564c4f4-4zsd4                   1/1     Running     0          102m   10.42.0.8   einps-arch-node-001   <none>           <none>
root@einps001:/arch-sol/img-capt-deployment/k8s/setup# kubectl exec -ti dns-test -- sh
/ # nslookup kube-dns.kube-system.svc.cluster.local 10.42.0.3
;; communications error to 10.42.0.3#53: timed out
^C
/ # ping 10.42.0.3
PING 10.42.0.3 (10.42.0.3): 56 data bytes
^C
--- 10.42.0.3 ping statistics ---
11 packets transmitted, 0 packets received, 100% packet loss
/ # traceroute 10.42.0.3
traceroute to 10.42.0.3 (10.42.0.3), 30 hops max, 46 byte packets
 1  *  *  *

Please guide me on how to debug this.

@dmathai Hi and welcome to the Forum :smile:
Does this thread help shed some light on your issue? https://forums.rancher.com/t/k3s-dns-resolution-failure/39091/2

Thank you for the quick reply.

Looks like --flannel-backend=ipsec is deprecated and will be removed in v1.27.0.

I did some experiments with dropwatch. It reports packets being dropped by "nf_hook_slow" (nf_hook_slow is the interface between the networking code and netfilter).
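For anyone wanting to reproduce this step: dropwatch is an interactive tool, so the session below is only a rough sketch of how the drop location was found (it needs root and kernel drop-monitor support; the sample output line is illustrative, not from this machine).

```shell
# -l kas resolves drop locations to kernel symbol names
dropwatch -l kas
# At the "dropwatch>" prompt, type:
#   start      # begin monitoring packet drops
# Dropped packets are then reported with the kernel function
# that freed them, e.g. a line mentioning nf_hook_slow
# indicates the drop happened inside netfilter hook processing.
#   stop       # stop monitoring
#   exit       # quit dropwatch
```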

So I checked the rules in iptables, ebtables, nftables, etc. (see the Netfilter article on Wikipedia) and found that the rule below was causing the problem:

nft list table ip filter

table ip filter {
	chain FORWARD {
		type filter hook forward priority filter; policy drop;
	}
}

Changed "drop" to "accept" and it started working.
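For reference, the policy change can be made with nft directly; a minimal sketch, assuming the table and chain names from the output above (requires root, and note the change is not persistent across reboots — on many distros a FORWARD drop policy like this is installed by firewalld, which the k3s docs recommend disabling):

```shell
# Switch the FORWARD chain's default policy from drop to accept
nft add chain ip filter FORWARD '{ policy accept; }'

# Verify the new policy is in place
nft list chain ip filter FORWARD
```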