kubectl exec / logs get "Error from server:"

All, I have been running a Rancher 2.5.7 cluster on DigitalOcean for about 6 months. About two weeks ago, 'kubectl exec' and 'kubectl logs' stopped working against everything; I just get the error 'Error from server:' and nothing else. Everything else works, including exec and kubectl from the Rancher UI.

Using -v=7:

$ kubectl -v=7 -n xxxxx exec -it bastion-69985dfb6f-p7br9 -- bash
I0131 16:23:07.380409  604221 loader.go:359] Config loaded from file /home/xxxx/.kube/config
I0131 16:23:07.380874  604221 round_trippers.go:415] GET https://rancher.xxxx.com/k8s/clusters/c-pmvm4/api/v1/namespaces/xxxxx/pods/bastion-69985dfb6f-p7br9
I0131 16:23:07.380888  604221 round_trippers.go:422] Request Headers:
I0131 16:23:07.380896  604221 round_trippers.go:426]     Accept: application/json, */*
I0131 16:23:07.380914  604221 round_trippers.go:426]     User-Agent: kubectl/v1.10.0+b3b92b2 (linux/amd64) kubernetes/b3b92b2
I0131 16:23:07.380929  604221 round_trippers.go:426]     Authorization: Bearer <masked>
I0131 16:23:07.566998  604221 round_trippers.go:441] Response Status: 200 OK in 186 milliseconds
I0131 16:23:07.590936  604221 round_trippers.go:415] POST https://rancher.xxxxx.com/k8s/clusters/c-pmvm4/api/v1/namespaces/xxxxx/pods/bastion-69985dfb6f-p7br9/exec?command=ls&container=bastion&container=bastion&stdin=true&stdout=true&tty=true
I0131 16:23:07.591043  604221 round_trippers.go:422] Request Headers:
I0131 16:23:07.591099  604221 round_trippers.go:426]     X-Stream-Protocol-Version: v4.channel.k8s.io
I0131 16:23:07.591141  604221 round_trippers.go:426]     X-Stream-Protocol-Version: v3.channel.k8s.io
I0131 16:23:07.591180  604221 round_trippers.go:426]     X-Stream-Protocol-Version: v2.channel.k8s.io
I0131 16:23:07.591219  604221 round_trippers.go:426]     X-Stream-Protocol-Version: channel.k8s.io
I0131 16:23:07.591257  604221 round_trippers.go:426]     User-Agent: kubectl/v1.10.0+b3b92b2 (linux/amd64) kubernetes/b3b92b2
I0131 16:23:07.591303  604221 round_trippers.go:426]     Authorization: Bearer <masked>
I0131 16:23:07.657109  604221 round_trippers.go:441] Response Status: 403 Forbidden in 65 milliseconds
I0131 16:23:07.658629  604221 helpers.go:201] server response object: [{
  "metadata": {}
}]
F0131 16:23:07.658715  604221 helpers.go:119] Error from server: 

Not sure where to start, or what service to restart.
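One small detail the verbose output reveals: the old v1.10 client sends the `container` query parameter twice in the exec URL. A quick way to inspect the query string copied from the `-v=7` output (hostnames and names are the masked placeholders from the post):

```python
from urllib.parse import urlsplit, parse_qs

# Exec URL as logged by the old kubectl 1.10 client (masked as in the post)
url = ("https://rancher.xxxxx.com/k8s/clusters/c-pmvm4/api/v1/namespaces/xxxxx"
       "/pods/bastion-69985dfb6f-p7br9/exec"
       "?command=ls&container=bastion&container=bastion"
       "&stdin=true&stdout=true&tty=true")

params = parse_qs(urlsplit(url).query)
print(params["container"])  # ['bastion', 'bastion'] -- sent twice by the old client
print(params["command"])    # ['ls']
```

The duplicate is gone in the v1.23 request later in the thread, so it is a client quirk rather than the cause of the 403.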

Went to:

https://rancher.xxxxx.com/k8s/clusters/c-pmvm4/api/v1/namespaces/xxxxx/pods/bastion-69985dfb6f-p7br9/exec?command=ls&container=bastion&container=bastion&stdin=true&stdout=true&tty=true

in a browser and got this:

{
    "kind": "Status",
    "apiVersion": "v1",
    "metadata": {},
    "status": "Failure",
    "message": "Upgrade request required",
    "reason": "BadRequest",
    "code": 400
}
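For what it's worth, a 400 "Upgrade request required" from a browser is expected here: exec is not a plain GET endpoint; the client has to upgrade the connection to a streaming protocol (SPDY/WebSocket), which kubectl signals with the X-Stream-Protocol-Version headers visible in the -v=7 output. A rough illustrative sketch of that server-side decision (not the real apiserver code):

```python
def exec_status(headers: dict) -> int:
    """Illustrative only: roughly how an exec endpoint distinguishes
    an upgradable streaming client from a plain browser request."""
    wants_upgrade = headers.get("Connection", "").lower() == "upgrade"
    speaks_stream_protocol = "X-Stream-Protocol-Version" in headers
    if wants_upgrade and speaks_stream_protocol:
        return 101  # Switching Protocols: the exec stream can start
    return 400      # "Upgrade request required", as seen in the browser

# A browser GET sends neither header:
print(exec_status({}))  # 400
# kubectl requests an upgrade and advertises the stream protocols:
print(exec_status({"Connection": "Upgrade",
                   "X-Stream-Protocol-Version": "v4.channel.k8s.io"}))  # 101
```

So the browser test only confirms the endpoint is reachable; it cannot reproduce the 403 that kubectl gets.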

Upgraded to the latest kubectl:

$ kubectl version
Client Version: version.Info{Major:"1", Minor:"23", GitVersion:"v1.23.3", GitCommit:"816c97ab8cff8a1c72eccca1026f7820e93e0d25", GitTreeState:"clean", BuildDate:"2022-01-25T21:25:17Z", GoVersion:"go1.17.6", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"20", GitVersion:"v1.20.5", GitCommit:"6b1d87acf3c8253c123756b9e61dac642678305f", GitTreeState:"clean", BuildDate:"2021-03-18T01:02:01Z", GoVersion:"go1.15.8", Compiler:"gc", Platform:"linux/amd64"}
WARNING: version difference between client (1.23) and server (1.20) exceeds the supported minor version skew of +/-1

Logs work now, but still no exec, just bigger error messages:

$ kubectl -v=7 -n xxxxx exec -it bastion-69985dfb6f-p7br9 -- ls
I0131 16:47:55.296761  607121 loader.go:372] Config loaded from file:  /home/xxxxx/.kube/config
I0131 16:47:55.370978  607121 round_trippers.go:463] GET https://rancher.xxxxx.com/k8s/clusters/c-pmvm4/api/v1/namespaces/xxxxx/pods/bastion-69985dfb6f-p7br9
I0131 16:47:55.371038  607121 round_trippers.go:469] Request Headers:
I0131 16:47:55.371077  607121 round_trippers.go:473]     Accept: application/json, */*
I0131 16:47:55.371106  607121 round_trippers.go:473]     User-Agent: kubectl/v1.23.3 (linux/amd64) kubernetes/816c97a
I0131 16:47:55.371143  607121 round_trippers.go:473]     Authorization: Bearer <masked>
I0131 16:47:55.671054  607121 round_trippers.go:574] Response Status: 200 OK in 299 milliseconds
I0131 16:47:55.678723  607121 podcmd.go:88] Defaulting container name to bastion
I0131 16:47:55.679888  607121 round_trippers.go:463] POST https://rancher.xxxxx.com/k8s/clusters/c-pmvm4/api/v1/namespaces/xxxxx/pods/bastion-69985dfb6f-p7br9/exec?command=ls&container=bastion&stdin=true&stdout=true&tty=true
I0131 16:47:55.679989  607121 round_trippers.go:469] Request Headers:
I0131 16:47:55.680038  607121 round_trippers.go:473]     Authorization: Bearer <masked>
I0131 16:47:55.680072  607121 round_trippers.go:473]     X-Stream-Protocol-Version: v4.channel.k8s.io
I0131 16:47:55.680101  607121 round_trippers.go:473]     X-Stream-Protocol-Version: v3.channel.k8s.io
I0131 16:47:55.680145  607121 round_trippers.go:473]     X-Stream-Protocol-Version: v2.channel.k8s.io
I0131 16:47:55.680195  607121 round_trippers.go:473]     X-Stream-Protocol-Version: channel.k8s.io
I0131 16:47:55.680239  607121 round_trippers.go:473]     User-Agent: kubectl/v1.23.3 (linux/amd64) kubernetes/816c97a
I0131 16:47:55.736041  607121 round_trippers.go:574] Response Status: 403 Forbidden in 55 milliseconds
I0131 16:47:55.737221  607121 helpers.go:219] server response object: [{
  "metadata": {}
}]
F0131 16:47:55.737303  607121 helpers.go:118] Error from server: 
goroutine 1 [running]:
k8s.io/kubernetes/vendor/k8s.io/klog/v2.stacks(0x1)
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:1038 +0x8a
k8s.io/kubernetes/vendor/k8s.io/klog/v2.(*loggingT).output(0x307e000, 0x3, 0x0, 0xc0006bebd0, 0x2, {0x25f1467, 0x10}, 0xc000580000, 0x0)
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:987 +0x5fd
k8s.io/kubernetes/vendor/k8s.io/klog/v2.(*loggingT).printDepth(0xc0006e3278, 0x13, 0x0, {0x0, 0x0}, 0x0, {0xc0005e54f0, 0x1, 0x1})
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:735 +0x1ae
k8s.io/kubernetes/vendor/k8s.io/klog/v2.FatalDepth(...)
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:1518
k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/cmd/util.fatal({0xc0006e3278, 0x13}, 0xc000229400)
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/cmd/util/helpers.go:96 +0xc5
k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/cmd/util.checkErr({0x1fe9f80, 0xc000229400}, 0x1e781d0)
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/cmd/util/helpers.go:191 +0x7d7
k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/cmd/util.CheckErr(...)
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/cmd/util/helpers.go:118
k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/cmd/exec.NewCmdExec.func1(0xc000704780, {0xc0004b5030, 0x2, 0x7})
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/cmd/exec/exec.go:97 +0xc5
k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).execute(0xc000704780, {0xc0004b4fc0, 0x7, 0x7})
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:860 +0x5f8
k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).ExecuteC(0xc00047bb80)
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:974 +0x3bc
k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).Execute(...)
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:902
k8s.io/kubernetes/vendor/k8s.io/component-base/cli.run(0xc00047bb80)
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/component-base/cli/run.go:146 +0x325
k8s.io/kubernetes/vendor/k8s.io/component-base/cli.RunNoErrOutput(...)
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/component-base/cli/run.go:84
main.main()
  _output/dockerized/go/src/k8s.io/kubernetes/cmd/kubectl/kubectl.go:30 +0x1e

goroutine 18 [chan receive]:
k8s.io/kubernetes/vendor/k8s.io/klog/v2.(*loggingT).flushDaemon(0x0)
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:1181 +0x6a
created by k8s.io/kubernetes/vendor/k8s.io/klog/v2.init.0
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:420 +0xfb

goroutine 6 [select]:
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0x0, {0x1fea280, 0xc0006c0000}, 0x1, 0xc000114360)
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:167 +0x13b
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil(0x0, 0x12a05f200, 0x0, 0x0, 0x0)
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:133 +0x89
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.Until(...)
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:90
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.Forever(0x0, 0x0)
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:81 +0x28
created by k8s.io/kubernetes/vendor/k8s.io/component-base/logs.InitLogs
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/component-base/logs/logs.go:179 +0x85

goroutine 131 [IO wait]:
internal/poll.runtime_pollWait(0x7fa1dcf0a058, 0x72)
  /usr/local/go/src/runtime/netpoll.go:234 +0x89
internal/poll.(*pollDesc).wait(0xc0001b5200, 0xc000570000, 0x0)
  /usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x32
internal/poll.(*pollDesc).waitRead(...)
  /usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc0001b5200, {0xc000570000, 0x2c42, 0x2c42})
  /usr/local/go/src/internal/poll/fd_unix.go:167 +0x25a
net.(*netFD).Read(0xc0001b5200, {0xc000570000, 0xc000570005, 0x1491})
  /usr/local/go/src/net/fd_posix.go:56 +0x29
net.(*conn).Read(0xc0005d6010, {0xc000570000, 0x4fab7e, 0xc0005a17f8})
  /usr/local/go/src/net/net.go:183 +0x45
crypto/tls.(*atLeastReader).Read(0xc00089d590, {0xc000570000, 0x0, 0x409e2d})
  /usr/local/go/src/crypto/tls/conn.go:777 +0x3d
bytes.(*Buffer).ReadFrom(0xc0007d2278, {0x1fe80c0, 0xc00089d590})
  /usr/local/go/src/bytes/buffer.go:204 +0x98
crypto/tls.(*Conn).readFromUntil(0xc0007d2000, {0x1febaa0, 0xc0005d6010}, 0x0)
  /usr/local/go/src/crypto/tls/conn.go:799 +0xe5
crypto/tls.(*Conn).readRecordOrCCS(0xc0007d2000, 0x0)
  /usr/local/go/src/crypto/tls/conn.go:606 +0x112
crypto/tls.(*Conn).readRecord(...)
  /usr/local/go/src/crypto/tls/conn.go:574
crypto/tls.(*Conn).Read(0xc0007d2000, {0xc0001a6000, 0x1000, 0x1})
  /usr/local/go/src/crypto/tls/conn.go:1277 +0x16f
net/http.(*persistConn).Read(0xc00014a360, {0xc0001a6000, 0xc0001140c0, 0xc0005a1d30})
  /usr/local/go/src/net/http/transport.go:1926 +0x4e
bufio.(*Reader).fill(0xc000091bc0)
  /usr/local/go/src/bufio/bufio.go:101 +0x103
bufio.(*Reader).Peek(0xc000091bc0, 0x1)
  /usr/local/go/src/bufio/bufio.go:139 +0x5d
net/http.(*persistConn).readLoop(0xc00014a360)
  /usr/local/go/src/net/http/transport.go:2087 +0x1ac
created by net/http.(*Transport).dialConn
  /usr/local/go/src/net/http/transport.go:1747 +0x1e05

goroutine 132 [select]:
net/http.(*persistConn).writeLoop(0xc00014a360)
  /usr/local/go/src/net/http/transport.go:2386 +0xfb
created by net/http.(*Transport).dialConn
  /usr/local/go/src/net/http/transport.go:1748 +0x1e65

goroutine 136 [syscall]:
os/signal.signal_recv()
  /usr/local/go/src/runtime/sigqueue.go:169 +0x98
os/signal.loop()
  /usr/local/go/src/os/signal/signal_unix.go:24 +0x19
created by os/signal.Notify.func1.1
  /usr/local/go/src/os/signal/signal.go:151 +0x2c

@Andrew_Prowse Hi, did you resolve this problem? Unfortunately, we are hitting the same thing; we cannot exec a shell into any pods.
@superseb Hi, sorry to bother you. Could you take a look at this issue?

Resolved. We had deployed an app that was using port 10010. We moved it to a different port, and it is all good now.
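For anyone else landing here: 10010 is, as far as I know, the default port the kubelet's dockershim/CRI streaming server uses for exec/attach/port-forward, so a host-network workload grabbing it would break exec exactly like this while leaving logs and everything else fine. A quick generic check for whether something is already listening on a port (run on the node; the port number is the one from this thread):

```python
import socket

def port_in_use(port: int, host: str = "127.0.0.1") -> bool:
    """Return True if something is already listening on host:port."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        # connect_ex returns 0 when the TCP connection succeeds
        return s.connect_ex((host, port)) == 0

# On an affected node this should report the conflicting app:
# print(port_in_use(10010))
```

Tools like `ss -ltnp` on the node will also tell you which process holds the port.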