Hello everyone,
on our dev worker server, managed through Rancher 2.0.8, we are seeing a considerably high number of open sockets held by the dockerd process:
Sockets:
netstat -x | grep docker | wc -l
17147
FDs:
ls -l /proc/118416/fd | wc -l
17392
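For reference, the counts above were taken by hand against a hard-coded PID. A small shell sketch (the function names are my own, and it uses the current shell's PID as a stand-in for dockerd's) can repeat the same measurements for any process:

```shell
#!/bin/sh
# Count open file descriptors and unix sockets for a given PID,
# the same way the manual commands above do it.

count_fds() {
    # every entry under /proc/<pid>/fd is one open descriptor
    ls /proc/"$1"/fd 2>/dev/null | wc -l
}

count_sockets() {
    # socket descriptors appear as symlinks to "socket:[inode]"
    ls -l /proc/"$1"/fd 2>/dev/null | grep -c 'socket:'
}

# demo against the current shell; substitute e.g. $(pidof dockerd)
count_fds "$$"
count_sockets "$$"
```

On the affected host, calling these with dockerd's PID should reproduce the 17k figures shown above.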
A few applications are running in this environment:
kubectl get pods --all-namespaces
NAMESPACE           NAME                                     READY   STATUS    RESTARTS   AGE
cattle-system       cattle-cluster-agent-77b896b7dd-8w76j    1/1     Running   6          109d
cattle-system       cattle-node-agent-q8mfk                  1/1     Running   6          109d
devops              auth-ms-1-846f4c8996-hmw4c               1/1     Running   1          14d
devops              auth-ms-devops-76dc9c7b4d-m92fj          1/1     Running   0          11d
ingress-nginx       default-http-backend-67d6f7fc4d-44rmz    1/1     Running   6          109d
ingress-nginx       nginx-ingress-controller-7gkmz           1/1     Running   6          109d
jasperreports       pmaria-0                                 1/1     Running   0          11d
kube-system         canal-n5kzp                              3/3     Running   18         109d
kube-system         kube-dns-8f6685677-7g266                 3/3     Running   18         109d
kube-system         kube-dns-autoscaler-7bdc9685df-thdbj     1/1     Running   6          109d
progetto-test       security-gate-6bcf4b6f5-spv4c            1/1     Running   1          14d
servizi-educativi   suse-fe-bo-8665f9c88-phvk8               1/1     Running   0          1d
servizi-educativi   suse-interoperabilita-5b795467f4-8t9qz   1/1     Running   0          1d
servizi-educativi   suse-intranet-5cc8767cdd-qh7vz           1/1     Running   0          1d
servizi-educativi   suse-jasper-6f877c874-gjft4              1/1     Running   0          1d
servizi-educativi   suse-logic-6595946cfb-9pvrc              1/1     Running   0          1d
servizi-educativi   suse-portale-5f95994d56-dnkc6            1/1     Running   0          1h
And this is the system's configuration:
Hostname tvl-svilkub-be1
IP Address xxx.xxx.xxx.xxx
Kubelet Version v1.10.1
Kube Proxy Version v1.10.1
Docker Version 17.3.2
Kernel Version 3.10.0-862.14.4.el7.x86_64
Operating System CentOS Linux 7
I have a few questions:
What is an acceptable number of open sockets? Is it possible to close some of them?
And is there some kind of bug or problem here?
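For what it's worth, here is how I have been checking whether dockerd is approaching its kernel-enforced descriptor limit (a sketch; it uses the current shell's PID as a stand-in, so substitute the real dockerd PID):

```shell
#!/bin/sh
# Show the per-process open-file limit from /proc/<pid>/limits.
# Replace $$ with the dockerd PID, e.g. $(pidof dockerd).
grep 'Max open files' /proc/$$/limits
```

Comparing the "Soft Limit" column of that output with the ~17k descriptors counted above would show how much headroom is left.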
Thank you
Bye
Gianluca