Problems with the last update (Kubernetes 1.8.9)

Hi

After the last updates, I have a big problem with kube-dns. One of the containers (dnsmasq) fails to run and restarts (CrashLoopBackOff) all the time; the log says: Could not start dnsmasq with initial configuration: fork/exec /usr/sbin/dnsmasq: exec format error. If I try to reinstall from scratch (with all patches applied prior to the cluster setup), it is the same!
The only solution was a rollback to my old setup (started with version 1.7.7 and patched up to the switch to 1.8.7) via restore from snapshot … This configuration works without these problems.

Greetings

Frank

Frank, could you check the dnsmasq binary with “file”, please? One way to do it is:

 $ docker run --entrypoint="cat" $name_of_kubedns_image /usr/sbin/dnsmasq | file -
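If piping the binary through stdin is awkward, a sketch of an alternative (again assuming the image is available locally under $name_of_kubedns_image) is to copy the file out of a temporary container and inspect it on the host:

 $ id=$(docker create $name_of_kubedns_image)
 $ docker cp "$id":/usr/sbin/dnsmasq /tmp/dnsmasq
 $ docker rm "$id"
 $ file /tmp/dnsmasq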

Please report this to our support organization as well.

Or run something like:

kubectl -n kube-system exec -it kube-dns-1144198277-87cp4 -c dnsmasq bash
bash-4.3# file /usr/sbin/dnsmasq
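If the “file” utility is present inside the image, the same check can also be run non-interactively in one shot, for example:

kubectl -n kube-system exec kube-dns-1144198277-87cp4 -c dnsmasq -- file /usr/sbin/dnsmasq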

And also:

docker inspect $the_name_of_the_image
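Since an “exec format error” usually means the binary does not match the node’s architecture (or is truncated), it may also be worth comparing the two; a sketch, assuming $the_name_of_the_image refers to the image rather than a container:

docker inspect --format '{{.Os}}/{{.Architecture}}' $the_name_of_the_image
uname -m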

Please also report the version of these packages:

rpm -qa | grep dns
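and, to see which DNS-related images are actually loaded on the node (the names may differ):

docker images | grep -i dns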

[QUOTE=a_jaeger;51793]Or run something like:

kubectl -n kube-system exec -it kube-dns-1144198277-87cp4 -c dnsmasq bash
bash-4.3# file /usr/sbin/dnsmasq
[/QUOTE]

kubectl -n kube-system exec -it kube-dns-64d64759fd-ndspp -c dnsmasq bash
error: unable to upgrade connection: container not found (“dnsmasq”)

→ the dnsmasq container could not start
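(Since exec into a crash-looping container does not work, the output of the previous instance can still be read with something like: kubectl -n kube-system logs kube-dns-64d64759fd-ndspp -c dnsmasq --previous)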

And also:

docker inspect $the_name_of_the_image

dedavk8sw97:~ # docker inspect k8s_dnsmasq_kube-dns-64d64759fd-ndspp_kube-system_fc5ca881-31b1-11e8-ae67-000c29c6f1d9_16
[
{
“Id”: “0a2d88f2888eca2e0a1ee205c55a324443ac6013d668fa5bfb6ebd5ab00bba6a”,
“Created”: “2018-03-27T12:26:16.218689894Z”,
“Path”: “/usr/bin/dnsmasq-nanny”,
“Args”: [
“-v=2”,
“-logtostderr”,
“-configDir=/etc/k8s/dns/dnsmasq-nanny”,
“-restartDnsmasq=true”,
“–”,
“-k”,
“–cache-size=1000”,
“–log-facility=-”,
“–server=/cluster.local/127.0.0.1#10053”,
“–server=/in-addr.arpa/127.0.0.1#10053”,
“–server=/ip6.arpa/127.0.0.1#10053”
],
“State”: {
“Status”: “exited”,
“Running”: false,
“Paused”: false,
“Restarting”: false,
“OOMKilled”: false,
“Dead”: false,
“Pid”: 0,
“ExitCode”: 255,
“Error”: “”,
“StartedAt”: “2018-03-27T12:26:16.412174132Z”,
“FinishedAt”: “2018-03-27T12:26:16.919053448Z”
},
“Image”: “sha256:2a407a9181c0e7337bdb23042f6535c051c4061b1b430009532d2edc35cc3474”,
“ResolvConfPath”: “/var/lib/docker/containers/2d0922c5a43f472f5fc2e576e97bed307292ec2c88b581770ca503af425f6f54/resolv.conf”,
“HostnamePath”: “/var/lib/docker/containers/2d0922c5a43f472f5fc2e576e97bed307292ec2c88b581770ca503af425f6f54/hostname”,
“HostsPath”: “/var/lib/kubelet/pods/fc5ca881-31b1-11e8-ae67-000c29c6f1d9/etc-hosts”,
“LogPath”: “/var/lib/docker/containers/0a2d88f2888eca2e0a1ee205c55a324443ac6013d668fa5bfb6ebd5ab00bba6a/0a2d88f2888eca2e0a1ee205c55a324443ac6013d668fa5bfb6ebd5ab00bba6a-json.log”,
“Name”: “/k8s_dnsmasq_kube-dns-64d64759fd-ndspp_kube-system_fc5ca881-31b1-11e8-ae67-000c29c6f1d9_16”,
“RestartCount”: 0,
“Driver”: “btrfs”,
“MountLabel”: “”,
“ProcessLabel”: “”,
“AppArmorProfile”: “”,
“ExecIDs”: null,
“HostConfig”: {
“Binds”: [
“/var/lib/kubelet/pods/fc5ca881-31b1-11e8-ae67-000c29c6f1d9/volumes/kubernetes.io~configmap/kube-dns-config:/etc/k8s/dns/dnsmasq-nanny:ro”,
“/var/lib/kubelet/pods/fc5ca881-31b1-11e8-ae67-000c29c6f1d9/volumes/kubernetes.io~secret/kube-dns-token-ppmj4:/var/run/secrets/kubernetes.io/serviceaccount:ro”,
“/var/lib/kubelet/pods/fc5ca881-31b1-11e8-ae67-000c29c6f1d9/etc-hosts:/etc/hosts”,
“/var/lib/kubelet/pods/fc5ca881-31b1-11e8-ae67-000c29c6f1d9/containers/dnsmasq/321cb350:/dev/termination-log”
],
“ContainerIDFile”: “”,
“LogConfig”: {
“Type”: “json-file”,
“Config”: {}
},
“NetworkMode”: “container:2d0922c5a43f472f5fc2e576e97bed307292ec2c88b581770ca503af425f6f54”,
“PortBindings”: null,
“RestartPolicy”: {
“Name”: “”,
“MaximumRetryCount”: 0
},
“AutoRemove”: false,
“VolumeDriver”: “”,
“VolumesFrom”: null,
“CapAdd”: null,
“CapDrop”: null,
“Dns”: null,
“DnsOptions”: null,
“DnsSearch”: null,
“ExtraHosts”: null,
“GroupAdd”: null,
“IpcMode”: “container:2d0922c5a43f472f5fc2e576e97bed307292ec2c88b581770ca503af425f6f54”,
“Cgroup”: “”,
“Links”: null,
“OomScoreAdj”: 998,
“PidMode”: “”,
“Privileged”: false,
“PublishAllPorts”: false,
“ReadonlyRootfs”: false,
“SecurityOpt”: [
“seccomp=unconfined”
],
“UTSMode”: “”,
“UsernsMode”: “”,
“ShmSize”: 67108864,
“Runtime”: “runc”,
“ConsoleSize”: [
0,
0
],
“Isolation”: “”,
“CpuShares”: 153,
“Memory”: 0,
“CgroupParent”: “/kubepods/burstable/podfc5ca881-31b1-11e8-ae67-000c29c6f1d9”,
“BlkioWeight”: 0,
“BlkioWeightDevice”: null,
“BlkioDeviceReadBps”: null,
“BlkioDeviceWriteBps”: null,
“BlkioDeviceReadIOps”: null,
“BlkioDeviceWriteIOps”: null,
“CpuPeriod”: 0,
“CpuQuota”: 0,
“CpusetCpus”: “”,
“CpusetMems”: “”,
“Devices”: [],
“DiskQuota”: 0,
“KernelMemory”: 0,
“MemoryReservation”: 0,
“MemorySwap”: 0,
“MemorySwappiness”: -1,
“OomKillDisable”: false,
“PidsLimit”: 0,
“Ulimits”: null,
“CpuCount”: 0,
“CpuPercent”: 0,
“IOMaximumIOps”: 0,
“IOMaximumBandwidth”: 0
},
“GraphDriver”: {
“Name”: “btrfs”,
“Data”: null
},
“Mounts”: [
{
“Source”: “/var/lib/kubelet/pods/fc5ca881-31b1-11e8-ae67-000c29c6f1d9/volumes/kubernetes.io~configmap/kube-dns-config”,
“Destination”: “/etc/k8s/dns/dnsmasq-nanny”,
“Mode”: “ro”,
“RW”: false,
“Propagation”: “rprivate”
},
{
“Source”: “/var/lib/kubelet/pods/fc5ca881-31b1-11e8-ae67-000c29c6f1d9/volumes/kubernetes.io~secret/kube-dns-token-ppmj4”,
“Destination”: “/var/run/secrets/kubernetes.io/serviceaccount”,
“Mode”: “ro”,
“RW”: false,
“Propagation”: “rprivate”
},
{
“Source”: “/var/lib/kubelet/pods/fc5ca881-31b1-11e8-ae67-000c29c6f1d9/etc-hosts”,
“Destination”: “/etc/hosts”,
“Mode”: “”,
“RW”: true,
“Propagation”: “rprivate”
},
{
“Source”: “/var/lib/kubelet/pods/fc5ca881-31b1-11e8-ae67-000c29c6f1d9/containers/dnsmasq/321cb350”,
“Destination”: “/dev/termination-log”,
“Mode”: “”,
“RW”: true,
“Propagation”: “rprivate”
}
],
“Config”: {
“Hostname”: “kube-dns-64d64759fd-ndspp”,
“Domainname”: “”,
“User”: “0”,
“AttachStdin”: false,
“AttachStdout”: false,
“AttachStderr”: false,
“Tty”: false,
“OpenStdin”: false,
“StdinOnce”: false,
“Env”: [
“KUBE_DNS_SERVICE_PORT_DNS_TCP=53”,
“TILLER_SERVICE_PORT=44134”,
“TILLER_SERVICE_PORT_TILLER=44134”,
“DEX_PORT_5556_TCP=tcp://172.24.8.200:5556”,
“KUBERNETES_PORT_443_TCP_PROTO=tcp”,
“KUBE_DNS_PORT=udp://172.24.0.2:53”,
“KUBERNETES_PORT_443_TCP_ADDR=172.24.0.1”,
“KUBE_DNS_SERVICE_HOST=172.24.0.2”,
“TILLER_PORT_44134_TCP_PORT=44134”,
“DEX_PORT_5556_TCP_PROTO=tcp”,
“DEX_PORT_5556_TCP_ADDR=172.24.8.200”,
“KUBERNETES_SERVICE_PORT=443”,
“KUBERNETES_PORT_443_TCP=tcp://172.24.0.1:443”,
“KUBE_DNS_PORT_53_TCP_ADDR=172.24.0.2”,
“TILLER_PORT_44134_TCP_ADDR=172.24.53.240”,
“DEX_PORT=tcp://172.24.8.200:5556”,
“KUBERNETES_PORT_443_TCP_PORT=443”,
“KUBE_DNS_PORT_53_UDP_PORT=53”,
“KUBE_DNS_PORT_53_UDP_ADDR=172.24.0.2”,
“KUBERNETES_SERVICE_HOST=172.24.0.1”,
“KUBE_DNS_PORT_53_TCP=tcp://172.24.0.2:53”,
“TILLER_PORT=tcp://172.24.53.240:44134”,
“TILLER_PORT_44134_TCP_PROTO=tcp”,
“DEX_PORT_5556_TCP_PORT=5556”,
“TILLER_SERVICE_HOST=172.24.53.240”,
“TILLER_PORT_44134_TCP=tcp://172.24.53.240:44134”,
“DEX_SERVICE_PORT=5556”,
“KUBERNETES_SERVICE_PORT_HTTPS=443”,
“KUBERNETES_PORT=tcp://172.24.0.1:443”,
“KUBE_DNS_SERVICE_PORT=53”,
“KUBE_DNS_PORT_53_UDP=udp://172.24.0.2:53”,
“KUBE_DNS_PORT_53_TCP_PROTO=tcp”,
“DEX_SERVICE_PORT_DEX=5556”,
“KUBE_DNS_SERVICE_PORT_DNS=53”,
“KUBE_DNS_PORT_53_UDP_PROTO=udp”,
“KUBE_DNS_PORT_53_TCP_PORT=53”,
“DEX_SERVICE_HOST=172.24.8.200”
],
“Cmd”: [
“-v=2”,
“-logtostderr”,
“-configDir=/etc/k8s/dns/dnsmasq-nanny”,
“-restartDnsmasq=true”,
“–”,
“-k”,
“–cache-size=1000”,
“–log-facility=-”,
“–server=/cluster.local/127.0.0.1#10053”,
“–server=/in-addr.arpa/127.0.0.1#10053”,
“–server=/ip6.arpa/127.0.0.1#10053”
],
“Healthcheck”: {
“Test”: [
“NONE”
]
},
“Image”: “sha256:2a407a9181c0e7337bdb23042f6535c051c4061b1b430009532d2edc35cc3474”,
“Volumes”: null,
“WorkingDir”: “”,
“Entrypoint”: [
“/usr/bin/dnsmasq-nanny”
],
“OnBuild”: null,
“Labels”: {
“annotation.io.kubernetes.container.hash”: “6d878ec9”,
“annotation.io.kubernetes.container.ports”: “[{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"}]”,
“annotation.io.kubernetes.container.restartCount”: “16”,
“annotation.io.kubernetes.container.terminationMessagePath”: “/dev/termination-log”,
“annotation.io.kubernetes.container.terminationMessagePolicy”: “File”,
“annotation.io.kubernetes.pod.terminationGracePeriod”: “30”,
“io.kubernetes.container.logpath”: “/var/log/pods/fc5ca881-31b1-11e8-ae67-000c29c6f1d9/dnsmasq_16.log”,
“io.kubernetes.container.name”: “dnsmasq”,
“io.kubernetes.docker.type”: “container”,
“io.kubernetes.pod.name”: “kube-dns-64d64759fd-ndspp”,
“io.kubernetes.pod.namespace”: “kube-system”,
“io.kubernetes.pod.uid”: “fc5ca881-31b1-11e8-ae67-000c29c6f1d9”,
“io.kubernetes.sandbox.id”: “2d0922c5a43f472f5fc2e576e97bed307292ec2c88b581770ca503af425f6f54”,
“org.openbuildservice.disturl”: “‘obs://build.suse.de/SUSE:Maintenance:6053/SUSE_SLE-12-SP3_Update_Products_CASP20_Update_images_container_base/cb28fd0ab0e2bcf3448e33ceadd5f72e-sles12sp3-dnsmasq-nanny-image.SUSE_SLE-12-SP3_Update_Products_CASP20_Update’”
}
},
“NetworkSettings”: {
“Bridge”: “”,
“SandboxID”: “”,
“HairpinMode”: false,
“LinkLocalIPv6Address”: “”,
“LinkLocalIPv6PrefixLen”: 0,
“Ports”: null,
“SandboxKey”: “”,
“SecondaryIPAddresses”: null,
“SecondaryIPv6Addresses”: null,
“EndpointID”: “”,
“Gateway”: “”,
“GlobalIPv6Address”: “”,
“GlobalIPv6PrefixLen”: 0,
“IPAddress”: “”,
“IPPrefixLen”: 0,
“IPv6Gateway”: “”,
“MacAddress”: “”,
“Networks”: null
}
}
]
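The State block shows the container exiting almost immediately with ExitCode 255. On the worker node, the output of the exited container can usually still be read directly from Docker, for example (container name taken from the inspect above):

docker logs k8s_dnsmasq_kube-dns-64d64759fd-ndspp_kube-system_fc5ca881-31b1-11e8-ae67-000c29c6f1d9_16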

Please also report the version of these packages:

rpm -qa | grep dns

dedavk8sw97:~ # rpm -qa | grep dns
sles12-dnsmasq-nanny-image-2.0.1-2.3.15.x86_64
sles12-kubedns-image-2.0.1-2.3.11.x86_64
python-dnspython-1.14.0-2.8.noarch
libadns1-1.4-101.65.x86_64

Thanks for your help!

Frank

Frank, the versions look fine. Strange - please open a support request with a supportconfig attached.

Hi,

it seems that the problem has been fixed with one of the latest patches … On Thursday, I did two successful installations (each with 1 admin, 3 masters, and 3 workers) on exactly the same nodes (IPs, names, etc. all the same) without any problems (no crashing DNS and no problems with the dex pods).

Best regards

Frank

[QUOTE=FrankIhringer;51773]Hi

After the last updates, I have a big problem with kube-dns. One of the containers (dnsmasq) fails to run and restarts (CrashLoopBackOff) all the time; the log says: Could not start dnsmasq with initial configuration: fork/exec /usr/sbin/dnsmasq: exec format error. If I try to reinstall from scratch (with all patches applied prior to the cluster setup), it is the same!
The only solution was a rollback to my old setup (started with version 1.7.7 and patched up to the switch to 1.8.7) via restore from snapshot … This configuration works without these problems.

Greetings

Frank[/QUOTE]

I have the same problem on a freshly installed CaaS 3.0 system.

Could you open a support request with the log files attached, please? We need to investigate this, and a support request is the best way to do that.