Rancher Execute Shell disconnected

Hi all,

I have deployed Rancher 2.8.5 using Helm in my Kubernetes 1.25.12 cluster running on OpenStack. So far everything seems to be working very well. I managed to deploy K8s clusters in vSphere too.

But there seems to be a (timeout) problem. Whenever I open a shell to a pod using

kubectl exec

or through the WebUI

it closes after a short while. This becomes a problem when running longer tasks.
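A quick way to reproduce it (pod and namespace are placeholders, any pod with a shell will do): start an exec session that stays silent for a few minutes and check whether it survives.

kubectl exec -it <pod> -n <namespace> -- sh -c 'date; sleep 300; date'

If the second date never prints, something along the path is dropping idle connections.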

I found quite a few articles about this being an issue with ingress-nginx, but no real solution.

Can you please help?

Cheers

I see the same error on Rancher Desktop.

Hi,

Thanks. That lets me know that the issue is not related to my OpenStack → K8s setup.

I have not used Rancher Desktop, so I don’t know if it also uses ingress-nginx, but I tried debugging this a bit further. I opened a shell via the Rancher WebUI to the GitLab toolbox pod. I get disconnected after ~60 seconds, and I can see this in the ingress-nginx pod logs:

I0809 08:48:29.444660       7 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-64b5dcb789-2l2jv", UID:"3b45a885-fd18-401f-a1f2-7e5b37532229", APIVersion:"v1"
10.0.0.3 - - [09/Aug/2024:08:48:29 +0000] "GET /v3/connect HTTP/1.1" 101 121702 "-" "Go-http-client/1.1" 2651 289.973 [cattle-system-rancher-80] [] 10.100.33.177:80 0 289.973 101 725f95807cda7a627e9986c61466d548       
10.100.97.192 - - [09/Aug/2024:08:48:29 +0000] "GET /k8s/clusters/c-m-9rqf2hdm/api/v1/namespaces/gitlab-prod/pods/gitlab-prod-toolbox-556bb469b6-8l4xv/exec?container=toolbox&stdout=1&stdin=1&stderr=1&tty=1&command=%2F
10.0.0.3 - - [09/Aug/2024:08:48:35 +0000] "GET /v3/users?me=true HTTP/2.0" 200 808 "https://rancher.domain.local/dashboard/c/c-m-9rqf2hdm/explorer/pod" "Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/201001
10.0.0.3 - - [09/Aug/2024:08:48:39 +0000] "GET /v3/connect/config HTTP/2.0" 404 0 "-" "Go-http-client/2.0" 2065 0.002 [cattle-system-rancher-80] [] 10.100.33.177:80 0 0.002 404 abd83ebd140b902b5088ca65594982d4
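If this is the default ingress-nginx log format, the 101 status marks a WebSocket upgrade and the 289.973 values are the request and upstream response times in seconds, so at least that tunnel connection stayed open far longer than 60 seconds. I captured these by tailing the controller while reproducing the disconnect (the deployment name may differ in other installs):

kubectl logs -n ingress-nginx deploy/ingress-nginx-controller -f | grep -E '101|exec'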

To me this looks like a timeout setting. But when I check the ingress, the default timeout values are already very high:

kubectl get ingress rancher -n cattle-system -o yaml
....
nginx.ingress.kubernetes.io/proxy-connect-timeout: "3000"
nginx.ingress.kubernetes.io/proxy-read-timeout: "1800"
nginx.ingress.kubernetes.io/proxy-send-timeout: "1800"
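To rule out the annotations being silently ignored, the rendered nginx config inside the controller pod can be checked (the deployment name below assumes a default install):

kubectl exec -n ingress-nginx deploy/ingress-nginx-controller -- grep proxy_read_timeout /etc/nginx/nginx.conf

If the high values appear in the server block for the Rancher host, the disconnect is probably being enforced somewhere else.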

Is anyone else seeing the same issue?
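One more thing I still plan to rule out on my side: since the cluster runs on OpenStack, the traffic presumably passes an Octavia load balancer before it reaches ingress-nginx, and Octavia listeners default to a 50-second idle timeout (timeout_client_data / timeout_member_data = 50000 ms), which is suspiciously close to my ~60 seconds. Something like this should show and raise them (the listener ID is a placeholder, values are in milliseconds):

openstack loadbalancer listener list
openstack loadbalancer listener show <listener-id> -c timeout_client_data -c timeout_member_data
openstack loadbalancer listener set <listener-id> --timeout-client-data 3600000 --timeout-member-data 3600000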

Cheers,
Oliver