K3s DNS resolution failure

I’m trying to configure k3s on my NVIDIA Jetson AGX Xavier.
Environmental Info:
k3s version v1.24.4+k3s1 (c3f830e)
go version go1.18.1

Node(s) CPU architecture, OS, and Version:
Linux ubuntu 4.9.253-tegra SMP PREEMPT Sun Apr 17 02:37:44 PDT 2022 aarch64 aarch64 aarch64 GNU/Linux

Cluster Configuration:
1 server

The issue is that DNS resolution does not work inside the cluster.

Steps To Reproduce:
curl -sfL https://get.k3s.io | sh -s - server --write-kubeconfig-mode 644 --cluster-init

Check whether DNS resolution works as follows:

kubectl run -it --rm --restart=Never busybox --image=busybox:1.28 -- nslookup kubernetes.default

And get:

If you don't see a command prompt, try pressing enter.
Address 1: 10.43.0.10

nslookup: can't resolve 'kubernetes.default'
pod "busybox" deleted
pod default/busybox terminated (Error)
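
To narrow down where this fails, it may help to query the CoreDNS service directly with a fully qualified name, which bypasses the pod's search path (the 10.43.0.10 address is taken from the kube-dns service shown below):

kubectl run -it --rm --restart=Never busybox --image=busybox:1.28 -- nslookup kubernetes.default.svc.cluster.local 10.43.0.10

If this also times out, pod-to-CoreDNS traffic itself is broken; if it resolves, the failure is more likely in the search path or in upstream forwarding.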

Check the CoreDNS logs:

for p in $(kubectl get pods --namespace=kube-system -l k8s-app=kube-dns -o name); do kubectl logs --namespace=kube-system $p; done
[WARNING] No files matching import glob pattern: /etc/coredns/custom/*.server
.:53
[WARNING] No files matching import glob pattern: /etc/coredns/custom/*.server
[INFO] plugin/reload: Running configuration SHA512 = b941b080e5322f6519009bb49349462c7ddb6317425b0f6a83e5451175b720703949e3f3b454a24e77f3ffe57fd5e9c6130e528a5a1dd00d9000e4afd6c1108d
CoreDNS-1.9.1
linux/arm64, go1.17.8, 4b597f8
[ERROR] plugin/errors: 2 4454268915698512202.3377740739522558627. HINFO: read udp 10.42.0.4:57689->8.8.8.8:53: i/o timeout
[ERROR] plugin/errors: 2 4454268915698512202.3377740739522558627. HINFO: read udp 10.42.0.4:53306->8.8.8.8:53: i/o timeout
[ERROR] plugin/errors: 2 4454268915698512202.3377740739522558627. HINFO: read udp 10.42.0.4:38198->8.8.8.8:53: i/o timeout
[ERROR] plugin/errors: 2 4454268915698512202.3377740739522558627. HINFO: read udp 10.42.0.4:34448->8.8.8.8:53: i/o timeout
[ERROR] plugin/errors: 2 4454268915698512202.3377740739522558627. HINFO: read udp 10.42.0.4:39717->8.8.8.8:53: i/o timeout
[ERROR] plugin/errors: 2 4454268915698512202.3377740739522558627. HINFO: read udp 10.42.0.4:54180->8.8.8.8:53: i/o timeout
[ERROR] plugin/errors: 2 4454268915698512202.3377740739522558627. HINFO: read udp 10.42.0.4:35919->8.8.8.8:53: i/o timeout
[ERROR] plugin/errors: 2 4454268915698512202.3377740739522558627. HINFO: read udp 10.42.0.4:38218->8.8.8.8:53: i/o timeout
[ERROR] plugin/errors: 2 4454268915698512202.3377740739522558627. HINFO: read udp 10.42.0.4:40765->8.8.8.8:53: i/o timeout
[WARNING] No files matching import glob pattern: /etc/coredns/custom/*.server
[ERROR] plugin/errors: 2 4454268915698512202.3377740739522558627. HINFO: read udp 10.42.0.4:45094->8.8.8.8:53: i/o timeout
[WARNING] No files matching import glob pattern: /etc/coredns/custom/*.server
[WARNING] No files matching import glob pattern: /etc/coredns/custom/*.server
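
The repeated i/o timeouts to 8.8.8.8:53 mean CoreDNS itself cannot reach the upstream resolver it picked up from the node's /etc/resolv.conf. A quick comparison between the host and a pod can show whether the upstream is reachable at all (a sketch, assuming 8.8.8.8 really is the configured upstream):

# on the Jetson host
nslookup google.com 8.8.8.8

# from inside the cluster
kubectl run -it --rm --restart=Never nettest --image=busybox:1.28 -- nslookup google.com 8.8.8.8

If the host succeeds but the pod times out, the problem sits in the overlay/NAT path rather than at the DNS server.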

This is my CoreDNS ConfigMap:

kubectl -n kube-system get cm coredns -o yaml
apiVersion: v1
data:
  Corefile: |
    .:53 {
        errors
        health
        ready
        kubernetes cluster.local in-addr.arpa ip6.arpa {
          pods insecure
          fallthrough in-addr.arpa ip6.arpa
        }
        hosts /etc/coredns/NodeHosts {
          ttl 60
          reload 15s
          fallthrough
        }
        prometheus :9153
        forward . /etc/resolv.conf
        cache 30
        loop
        reload
        loadbalance
    }
    import /etc/coredns/custom/*.server
  NodeHosts: |
    192.168.0.103 ubuntu
kind: ConfigMap
metadata:
  annotations:
    objectset.rio.cattle.io/applied: H4sIAAAAAAAA/4yQwWrzMBCEX0Xs2fEf20nsX9BDybH02lMva2kdq1Z2g6SkBJN3L8IUCiVtbyNGOzvfzoAn90IhOmHQcKmgAIsJQc+wl0CD8wQaSr1t1PzKSilFIUiIix4JfRoXHQjtdZHTuafAlCgq488xUSi9wK2AybEFDXvhwR2e8QQFHCnh50ZkloTJCcf8lP6NTIqUyuCkNJiSp9LJP5czoLjryztTWB0uE2iYmvjFuVSFenJsHx6tFf41gvGY6Y0Eshz/9D2e0OSZfIJVvMZExwzusSf/I9SIcQQNvaG6a+r/XVdV7abBddPtsN9W66Eedi0N7aberM22zaHf6t0tcPsIAAD//8Ix+PfoAQAA
    objectset.rio.cattle.io/id: ""
    objectset.rio.cattle.io/owner-gvk: k3s.cattle.io/v1, Kind=Addon
    objectset.rio.cattle.io/owner-name: coredns
    objectset.rio.cattle.io/owner-namespace: kube-system
  creationTimestamp: "2022-09-23T09:06:05Z"
  labels:
    objectset.rio.cattle.io/hash: bce283298811743a0386ab510f2f67ef74240c57
  name: coredns
  namespace: kube-system
  resourceVersion: "315"
  uid: 33a8ccf6-511f-49c4-9752-424859d67d70
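
For reference, the forward . /etc/resolv.conf line is why queries end up at 8.8.8.8: CoreDNS forwards everything outside the cluster domain to whatever nameserver the node's resolv.conf lists. The import /etc/coredns/custom/*.server line (the source of the warnings in the logs) lets you add extra server blocks through an optional coredns-custom ConfigMap instead of editing the managed Corefile. A minimal sketch, assuming you wanted to send a hypothetical lan domain to the laptop at 192.168.0.101 (both the zone name and the upstream here are illustrative):

apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns-custom
  namespace: kube-system
data:
  lan.server: |
    # extra server block imported by the managed Corefile
    lan:53 {
        forward . 192.168.0.101
    }

CoreDNS should pick this up via its reload plugin once the mounted ConfigMap propagates, though that can take a minute or two.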

Check pods:

kubectl -n kube-system get po -o wide

Output:

NAME                                      READY   STATUS      RESTARTS      AGE   IP          NODE     NOMINATED NODE   READINESS GATES
coredns-b96499967-sct84                   1/1     Running     1 (17h ago)   20h   10.42.0.6   ubuntu   <none>           <none>
helm-install-traefik-crd-wrh5b            0/1     Completed   0             20h   10.42.0.3   ubuntu   <none>           <none>
helm-install-traefik-wx7s2                0/1     Completed   1             20h   10.42.0.5   ubuntu   <none>           <none>
local-path-provisioner-7b7dc8d6f5-qxjvs   1/1     Running     1 (17h ago)   20h   10.42.0.3   ubuntu   <none>           <none>
metrics-server-668d979685-ngbmr           1/1     Running     1 (17h ago)   20h   10.42.0.5   ubuntu   <none>           <none>
svclb-traefik-67fcd721-mz6sd              2/2     Running     2 (17h ago)   20h   10.42.0.2   ubuntu   <none>           <none>
traefik-7cd4fcff68-j74gd                  1/1     Running     1 (17h ago)   20h   10.42.0.4   ubuntu   <none>           <none>

Check services:

kubectl -n kube-system get svc

Output:

NAME             TYPE           CLUSTER-IP     EXTERNAL-IP     PORT(S)                      AGE
kube-dns         ClusterIP      10.43.0.10     <none>          53/UDP,53/TCP,9153/TCP       20h
metrics-server   ClusterIP      10.43.178.64   <none>          443/TCP                      20h
traefik          LoadBalancer   10.43.36.41    192.168.0.103   80:30268/TCP,443:30293/TCP   20h
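
Since kube-dns listens on both 53/UDP and 53/TCP, comparing the two protocols can isolate UDP-specific packet loss (busybox's nslookup cannot force TCP, so this sketch assumes the dnsutils image from the Kubernetes DNS debugging docs):

kubectl run -it --rm --restart=Never dnsutils --image=registry.k8s.io/e2e-test-images/jessie-dnsutils:1.3 -- dig +tcp @10.43.0.10 kubernetes.default.svc.cluster.local

If the TCP query succeeds while UDP times out, that points at UDP handling in the overlay network rather than at CoreDNS itself.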

I have a local network where the internet connection is shared by a laptop with IP 192.168.0.101.
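
Given that, the Jetson's default route and nameserver should both point at the laptop, and pods need NAT through it to reach anything beyond the node. The basic checks for that path (a sketch; the 10.42.0.x pod addresses come from the pod list above):

ip route show default
cat /etc/resolv.conf

# can a pod reach past the node at all?
kubectl run -it --rm --restart=Never nettest --image=busybox:1.28 -- ping -c 3 8.8.8.8

If the host can ping 8.8.8.8 but the pod cannot, NAT/forwarding between the pod network and the shared connection is the likely culprit.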

I think I may have a network overlap or something like that. Any suggestions for further debugging?

@kracozebr, I have this exact problem. Were you able to solve this issue?

Hi, what I did was change the flannel backend flag. The command I start k3s with is:

curl -sfL https://get.k3s.io | sh -s - server --write-kubeconfig-mode 644 --flannel-backend=ipsec 

This worked for me.
If it doesn't work for you, you can choose a different flannel backend; see the docs: Network Options | K3s
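
For context on why changing the backend can help: the default vxlan backend is known to misbehave on some older kernels, including the 4.9 tegra kernels shipped on Jetson boards, where checksum offload on the flannel.1 interface corrupts VXLAN-encapsulated UDP, and that shows up exactly as pod DNS queries timing out. If you would rather stay on vxlan, a commonly cited workaround (treat it as a sketch to verify on your own node) is to disable the offload:

sudo ethtool -K flannel.1 tx-checksum-ip-generic off

Also note that the ipsec backend has since been deprecated in newer k3s releases, so host-gw or wireguard-native may be safer long-term choices.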