How to fix the default search domain for pods in a cluster?

While migrating apps from Docker to Kubernetes, I ran into an issue where the apps could not resolve external hostnames, i.e., hosts not in the cluster.

The app tried to resolve “news.newshosting.com” and failed.

tcpdump shows it tried the cluster domain, “cluster.local”, then my home network domain, “int.mydomain”, and stopped there.

sudo tcpdump -nnni eth0 udp port 53
tcpdump: verbose output suppressed, use -v[v]... for full protocol decode
listening on eth0, link-type EN10MB (Ethernet), snapshot length 262144 bytes
15:10:29.272313 IP 192.168.1.8.47953 > 192.168.1.13.53: 52012+ A? news.newshosting.com.default.svc.cluster.local. (64)
15:10:29.272396 IP 192.168.1.8.47953 > 192.168.1.13.53: 56086+ AAAA? news.newshosting.com.default.svc.cluster.local. (64)
15:10:29.273856 IP 192.168.1.8.45574 > 192.168.1.13.53: 58221+ A? news.newshosting.com.svc.cluster.local. (56)
15:10:29.274037 IP 192.168.1.8.45574 > 192.168.1.13.53: 58647+ AAAA? news.newshosting.com.svc.cluster.local. (56)
15:10:29.275081 IP 192.168.1.8.63015 > 192.168.1.13.53: 50588+ A? news.newshosting.com.cluster.local. (52)
15:10:29.275222 IP 192.168.1.8.63015 > 192.168.1.13.53: 50922+ AAAA? news.newshosting.com.cluster.local. (52)
15:10:29.276422 IP 192.168.1.8.4183 > 192.168.1.1.53: 7356+ A? news.newshosting.com.int.mydomain. (55)
15:10:29.276448 IP 192.168.1.8.33910 > 192.168.1.13.53: 52582+ A? news.newshosting.com.int.mydomain. (55)
15:10:29.276449 IP 192.168.1.13.53 > 192.168.1.8.47953: 52012 NXDomain 0/1/0 (139)
15:10:29.276974 IP 192.168.1.8.17739 > 192.168.1.1.53: 55587+ AAAA? news.newshosting.com.int.mydomain. (55)
15:10:29.277059 IP 192.168.1.8.33910 > 192.168.1.13.53: 53008+ AAAA? news.newshosting.com.int.mydomain. (55)

Is there a way, like setting a dnsPolicy, to fix this? I’d like queries for FQDNs that are not under cluster.local to be forwarded upstream for resolution.
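If I understand the resolver behavior correctly, the cause is the /etc/resolv.conf that Kubernetes writes into each pod: it lists the cluster search domains and sets ndots:5, so any name with fewer than five dots gets the search list appended before being tried as-is. “news.newshosting.com” has only two dots, which matches the query order in the capture above. A rough sketch of that ordering logic (not the actual glibc implementation, just an illustration):

```python
# Sketch of glibc-style search-list expansion. The search domains below
# match the tcpdump capture above; ndots=5 is the Kubernetes default.

def candidate_queries(name, search_domains, ndots=5):
    """Return the names a resolver would try, in order."""
    if name.endswith("."):
        # A trailing dot marks the name absolute: no search list at all.
        return [name.rstrip(".")]
    if name.count(".") >= ndots:
        # Enough dots: try the name as-is first, search list as fallback.
        return [name] + [f"{name}.{d}" for d in search_domains]
    # Too few dots: walk the search list first, try the bare name last.
    return [f"{name}.{d}" for d in search_domains] + [name]

search = [
    "default.svc.cluster.local",
    "svc.cluster.local",
    "cluster.local",
    "int.mydomain",
]

# Default ndots:5 -> search domains first, bare name last.
print(candidate_queries("news.newshosting.com", search))

# ndots:1 -> the bare name (2 dots >= 1) is tried first.
print(candidate_queries("news.newshosting.com", search, ndots=1))
```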

my cluster-dns ConfigMap:

kubectl describe cm -n kube-system cluster-dns
Name:         cluster-dns
Namespace:    kube-system
Labels:       <none>
Annotations:  <none>

Data
====
clusterDNS:
----
10.43.0.10
clusterDomain:
----
cluster.local

BinaryData
====

Events:  <none>

my CoreDNS ConfigMap:

kubectl describe cm -n kube-system coredns
Name:         coredns
Namespace:    kube-system
Labels:       objectset.rio.cattle.io/hash=bce283298811743a0386ab510f2f67ef74240c57
Annotations:  objectset.rio.cattle.io/applied:
                H4sIAAAAAAAA/4yQwWrzMBCEX0Xs2fEf20nsX9BDybH02lMva2kdq1Z2g6SkBJN3L8IUCiVtbyNGOzvfzoAn90IhOmHQcKmgAIsJQc+wl0CD8wQaSr1t1PzKSilFIUiIix4JfRoXHQ...
              objectset.rio.cattle.io/id: 
              objectset.rio.cattle.io/owner-gvk: k3s.cattle.io/v1, Kind=Addon
              objectset.rio.cattle.io/owner-name: coredns
              objectset.rio.cattle.io/owner-namespace: kube-system

Data
====
Corefile:
----
.:53 {
    errors
    health
    ready
    kubernetes cluster.local in-addr.arpa ip6.arpa {
      pods insecure
      fallthrough in-addr.arpa ip6.arpa
    }
    hosts /etc/coredns/NodeHosts {
      ttl 60
      reload 15s
      fallthrough
    }
    prometheus :9153
    forward . /etc/resolv.conf
    cache 30
    loop
    reload
    loadbalance
    import /etc/coredns/custom/*.override
}
import /etc/coredns/custom/*.server

NodeHosts:
----
192.168.1.8 node01
192.168.1.9 node02
192.168.1.10 node03


BinaryData
====

Events:  <none>
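The `import /etc/coredns/custom/*.override` and `import /etc/coredns/custom/*.server` lines in the Corefile are, as I understand it, k3s’s hook for cluster-wide CoreDNS customization: k3s mounts a ConfigMap named `coredns-custom` in `kube-system` at that path. A sketch of what a custom server block could look like (the zone name and upstream address are placeholders, not values from my cluster):

```yaml
# Hypothetical coredns-custom ConfigMap (k3s convention).
# "example.internal" and 192.168.1.1 are placeholders.
apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns-custom
  namespace: kube-system
data:
  example.server: |
    example.internal:53 {
      forward . 192.168.1.1
    }
```

Note this only changes server-side behavior in CoreDNS; it can’t change the ndots setting in each pod’s /etc/resolv.conf, which is what causes the search-list expansion in the first place.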

I did find what might be a workaround:

      dnsConfig:
        nameservers:
          - (internal dns server)
        options:
          - name: ndots
            value: "1"

The above has worked on one app, and I’m testing it on others at the moment. But I’d rather have a cluster-wide configuration that fixes this than have to configure DNS on every app.
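For reference, this is where the dnsConfig stanza sits in a Deployment manifest; it goes under the pod template’s spec, alongside containers (the app name and image here are placeholders):

```yaml
# Placement sketch; "myapp" and its image are placeholders.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      dnsPolicy: ClusterFirst      # the default; dnsConfig merges into it
      dnsConfig:
        options:
          - name: ndots
            value: "1"
      containers:
        - name: myapp
          image: myapp:latest
```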

Let me know if you need additional information. I didn’t see a dnsPolicy in the output; I’ll provide it if I can find it.

Update on the workaround: I didn’t need to specify a nameserver; “ndots” alone was sufficient.

      dnsConfig:
        options:
          - name: ndots
            value: "1"