Longhorn manager occasionally stops working

Hello everyone, can someone help troubleshoot why the Longhorn manager occasionally stops working on one of the cluster nodes? The log from the Longhorn manager is below:

```
10.42.10.126 - - [22/Feb/2023:10:18:53 +0000] "GET /metrics HTTP/1.1" 200 1127 "" "Prometheus/2.27.1"
10.42.9.152 - - [22/Feb/2023:10:19:26 +0000] "GET /v1/volumes/pvc-2e73556c-0fc7-4d8d-92d3-5ea2b5637f6e HTTP/1.1" 200 7570 "" "Go-http-client/1.1"
10.42.9.152 - - [22/Feb/2023:10:19:40 +0000] "GET /v1/volumes/pvc-3646eb36-8df0-42c7-851b-f85f9d932e76 HTTP/1.1" 200 7568 "" "Go-http-client/1.1"
10.42.10.126 - - [22/Feb/2023:10:19:53 +0000] "GET /metrics HTTP/1.1" 200 1129 "" "Prometheus/2.27.1"
10.42.10.126 - - [22/Feb/2023:10:20:55 +0000] "GET /metrics HTTP/1.1" 200 955 "" "Prometheus/2.27.1"
E0222 10:23:13.206813       1 instance_manager_controller.go:1173] failed to poll instance info to update instance manager instance-manager-r-c73a27d6: failed to list processes: rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing dial tcp 10.42.9.214:8500: i/o timeout"
E0222 10:23:13.212555       1 reflector.go:383] github.com/longhorn/longhorn-manager/k8s/pkg/client/informers/externalversions/factory.go:117: Failed to watch *v1beta1.InstanceManager: Get "https://10.43.0.1:443/apis/longhorn.io/v1beta1/instancemanagers?allowWatchBookmarks=true&resourceVersion=217349626&timeout=9m4s&timeoutSeconds=544&watch=true": http2: client connection lost
time="2023-02-22T10:23:13Z" level=warning msg="error during scrape" collector=node error="Get \"https://10.43.0.1:443/apis/metrics.k8s.io/v1beta1/nodes?fieldSelector=metadata.name%3Dgetlab-run-int02\": http2: client connection lost" node=getlab-run-int02
W0222 10:23:13.217358       1 reflector.go:405] k8s.io/client-go/informers/factory.go:135: watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
W0222 10:23:13.217373       1 reflector.go:405] github.com/longhorn/longhorn-manager/k8s/pkg/client/informers/externalversions/factory.go:117: watch of *v1beta1.BackingImage ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
W0222 10:23:13.217414       1 reflector.go:405] k8s.io/client-go/informers/factory.go:135: watch of *v1.StorageClass ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
W0222 10:23:13.217407       1 reflector.go:405] github.com/longhorn/longhorn-manager/k8s/pkg/client/informers/externalversions/factory.go:117: watch of *v1beta1.BackupTarget ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
W0222 10:23:13.217526       1 reflector.go:405] k8s.io/client-go/informers/factory.go:135: watch of *v1.CSIDriver ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
W0222 10:23:13.217579       1 reflector.go:405] k8s.io/client-go/informers/factory.go:135: watch of *v1.PersistentVolume ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
W0222 10:23:13.217424       1 reflector.go:405] k8s.io/client-go/informers/factory.go:135: watch of *v1.DaemonSet ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
W0222 10:23:13.217617       1 reflector.go:405] k8s.io/client-go/informers/factory.go:135: watch of *v1beta1.CronJob ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
time="2023-02-22T10:23:13Z" level=warning msg="error during scrape" collector=manager error="Get \"https://10.43.0.1:443/apis/metrics.k8s.io/v1beta1/namespaces/longhorn-system/pods?fieldSelector=metadata.name%3Dlonghorn-manager-t5slx\": http2: client connection lost" node=getlab-run-int02
W0222 10:23:13.217691       1 reflector.go:405] k8s.io/client-go/informers/factory.go:135: watch of *v1.PersistentVolumeClaim ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
W0222 10:23:13.217697       1 reflector.go:405] github.com/longhorn/longhorn-manager/k8s/pkg/client/informers/externalversions/factory.go:117: watch of *v1beta1.Setting ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
W0222 10:23:13.217725       1 reflector.go:405] github.com/longhorn/longhorn-manager/k8s/pkg/client/informers/externalversions/factory.go:117: watch of *v1beta1.RecurringJob ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
W0222 10:23:13.217523       1 reflector.go:405] k8s.io/client-go/informers/factory.go:135: watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
time="2023-02-22T10:23:13Z" level=warning msg="error during scrape" collector=instance_manager error="Get \"https://10.43.0.1:443/apis/metrics.k8s.io/v1beta1/namespaces/longhorn-system/pods?labelSelector=longhorn.io%2Fcomponent%3Dinstance-manager%2Clonghorn.io%2Fnode%3Dgetlab-run-int02\": http2: client connection lost" node=getlab-run-int02
W0222 10:23:13.217445       1 reflector.go:405] k8s.io/client-go/informers/factory.go:135: watch of *v1.Node ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
W0222 10:23:13.217476       1 reflector.go:405] k8s.io/client-go/informers/factory.go:135: watch of *v1.Pod ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
W0222 10:23:13.217823       1 reflector.go:405] github.com/longhorn/longhorn-manager/k8s/pkg/client/informers/externalversions/factory.go:117: watch of *v1beta1.BackingImageManager ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
W0222 10:23:13.217853       1 reflector.go:405] github.com/longhorn/longhorn-manager/k8s/pkg/client/informers/externalversions/factory.go:117: watch of *v1beta1.Backup ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
W0222 10:23:13.217883       1 reflector.go:405] github.com/longhorn/longhorn-manager/k8s/pkg/client/informers/externalversions/factory.go:117: watch of *v1beta1.EngineImage ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
W0222 10:23:13.217501       1 reflector.go:405] k8s.io/client-go/informers/factory.go:135: watch of *v1.PriorityClass ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
W0222 10:23:13.217718       1 reflector.go:405] k8s.io/client-go/informers/factory.go:135: watch of *v1beta1.PodDisruptionBudget ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
W0222 10:23:13.217765       1 reflector.go:405] github.com/longhorn/longhorn-manager/k8s/pkg/client/informers/externalversions/factory.go:117: watch of *v1beta1.ShareManager ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
W0222 10:23:13.217426       1 reflector.go:405] github.com/longhorn/longhorn-manager/k8s/pkg/client/informers/externalversions/factory.go:117: watch of *v1beta1.Node ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
W0222 10:23:13.218034       1 reflector.go:405] github.com/longhorn/longhorn-manager/k8s/pkg/client/informers/externalversions/factory.go:117: watch of *v1beta1.BackupVolume ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
W0222 10:23:13.218201       1 reflector.go:405] k8s.io/client-go/informers/factory.go:135: watch of *v1.Deployment ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
W0222 10:23:13.218213       1 reflector.go:405] github.com/longhorn/longhorn-manager/k8s/pkg/client/informers/externalversions/factory.go:117: watch of *v1beta1.Replica ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
W0222 10:23:13.218895       1 reflector.go:405] github.com/longhorn/longhorn-manager/k8s/pkg/client/informers/externalversions/factory.go:117: watch of *v1beta1.Engine ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
W0222 10:23:58.758156       1 reflector.go:405] github.com/longhorn/longhorn-manager/k8s/pkg/client/informers/externalversions/factory.go:117: watch of *v1beta1.BackingImageDataSource ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
time="2023-02-22T10:24:08Z" level=warning msg="error during scrape" collector=node error="Get \"https://10.43.0.1:443/apis/metrics.k8s.io/v1beta1/nodes?fieldSelector=metadata.name%3Dgetlab-run-int02\": net/http: TLS handshake timeout" node=getlab-run-int02
time="2023-02-22T10:24:08Z" level=warning msg="error during scrape" collector=manager error="Get \"https://10.43.0.1:443/apis/metrics.k8s.io/v1beta1/namespaces/longhorn-system/pods?fieldSelector=metadata.name%3Dlonghorn-manager-t5slx\": net/http: TLS handshake timeout" node=getlab-run-int02
time="2023-02-22T10:24:08Z" level=warning msg="error during scrape" collector=node error="Get \"https://10.43.0.1:443/apis/metrics.k8s.io/v1beta1/nodes?fieldSelector=metadata.name%3Dgetlab-run-int02\": net/http: TLS handshake timeout" node=getlab-run-int02
time="2023-02-22T10:24:08Z" level=warning msg="error during scrape" collector=instance_manager error="Get \"https://10.43.0.1:443/apis/metrics.k8s.io/v1beta1/namespaces/longhorn-system/pods?labelSelector=longhorn.io%2Fcomponent%3Dinstance-manager%2Clonghorn.io%2Fnode%3Dgetlab-run-int02\": net/http: TLS handshake timeout" node=getlab-run-int02
time="2023-02-22T10:24:08Z" level=warning msg="error during scrape" collector=manager error="Get \"https://10.43.0.1:443/apis/metrics.k8s.io/v1beta1/namespaces/longhorn-system/pods?fieldSelector=metadata.name%3Dlonghorn-manager-t5slx\": net/http: TLS handshake timeout" node=getlab-run-int02
10.42.10.126 - - [22/Feb/2023:10:22:38 +0000] "GET /metrics HTTP/1.1" 200 955 "" "Prometheus/2.27.1"
E0222 10:24:08.683799       1 instance_manager_controller.go:1173] failed to poll instance info to update instance manager instance-manager-e-28f5f113: failed to list processes: rpc error: code = DeadlineExceeded desc = context deadline exceeded
I0222 10:24:08.683895       1 trace.go:116] Trace[1324550905]: "Reflector ListAndWatch" name:github.com/longhorn/longhorn-manager/k8s/pkg/client/informers/externalversions/factory.go:117 (started: 2023-02-22 10:23:58.43112588 +0000 UTC m=+42570.273757354) (total time: 10.252689272s):
Trace[1324550905]: [10.252623227s] [10.252623227s] Objects listed
I0222 10:24:08.683998       1 trace.go:116] Trace[1657559384]: "Reflector ListAndWatch" name:github.com/longhorn/longhorn-manager/k8s/pkg/client/informers/externalversions/factory.go:117 (started: 2023-02-22 10:23:58.682490308 +0000 UTC m=+42570.525121819) (total time: 10.001415965s):
Trace[1657559384]: [10.001357049s] [10.001357049s] Objects listed
I0222 10:24:08.685604       1 trace.go:116] Trace[1904744141]: "Reflector ListAndWatch" name:github.com/longhorn/longhorn-manager/k8s/pkg/client/informers/externalversions/factory.go:117 (started: 2023-02-22 10:23:45.421105938 +0000 UTC m=+42557.263737410) (total time: 23.26445612s):
Trace[1904744141]: [23.264426206s] [23.264426206s] Objects listed
I0222 10:24:08.685743       1 trace.go:116] Trace[1530417941]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:135 (started: 2023-02-22 10:23:30.955196716 +0000 UTC m=+42542.797828324) (total time: 37.730515107s):
Trace[1530417941]: [37.730476254s] [37.730476254s] Objects listed
I0222 10:24:08.685989       1 trace.go:116] Trace[288687763]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:135 (started: 2023-02-22 10:23:45.988593121 +0000 UTC m=+42557.831224582) (total time: 22.697307939s):
Trace[288687763]: [22.697286518s] [22.697286518s] Objects listed
I0222 10:24:08.687076       1 trace.go:116] Trace[581509272]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:135 (started: 2023-02-22 10:23:31.151953858 +0000 UTC m=+42542.994585335) (total time: 37.535083561s):
Trace[581509272]: [37.534972064s] [37.534972064s] Objects listed
time="2023-02-22T10:24:08Z" level=warning msg="error during scrape" collector=instance_manager error="Get \"https://10.43.0.1:443/apis/metrics.k8s.io/v1beta1/namespaces/longhorn-system/pods?labelSelector=longhorn.io%2Fcomponent%3Dinstance-manager%2Clonghorn.io%2Fnode%3Dgetlab-run-int02\": net/http: TLS handshake timeout" node=getlab-run-int02
10.42.10.126 - - [22/Feb/2023:10:23:13 +0000] "GET /metrics HTTP/1.1" 200 955 "" "Prometheus/2.27.1"
I0222 10:24:08.722127       1 trace.go:116] Trace[519499001]: "Reflector ListAndWatch" name:github.com/longhorn/longhorn-manager/k8s/pkg/client/informers/externalversions/factory.go:117 (started: 2023-02-22 10:23:24.27423428 +0000 UTC m=+42536.116865735) (total time: 44.447810338s):
Trace[519499001]: [44.447678706s] [44.447678706s] Objects listed
I0222 10:24:08.753488       1 trace.go:116] Trace[980887732]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:135 (started: 2023-02-22 10:23:58.246901964 +0000 UTC m=+42570.089533415) (total time: 10.506490223s):
Trace[980887732]: [10.506468062s] [10.506468062s] Objects listed
I0222 10:24:08.801735       1 trace.go:116] Trace[1718182006]: "Reflector ListAndWatch" name:github.com/longhorn/longhorn-manager/k8s/pkg/client/informers/externalversions/factory.go:117 (started: 2023-02-22 10:23:44.254292811 +0000 UTC m=+42556.096924282) (total time: 24.547363808s):
Trace[1718182006]: [24.547269701s] [24.547269701s] Objects listed
I0222 10:24:08.806559       1 trace.go:116] Trace[1736577926]: "Reflector ListAndWatch" name:github.com/longhorn/longhorn-manager/k8s/pkg/client/informers/externalversions/factory.go:117 (started: 2023-02-22 10:23:58.278048895 +0000 UTC m=+42570.120680380) (total time: 10.528454195s):
Trace[1736577926]: [10.528421902s] [10.528421902s] Objects listed
I0222 10:24:08.807762       1 trace.go:116] Trace[51951775]: "Reflector ListAndWatch" name:github.com/longhorn/longhorn-manager/k8s/pkg/client/informers/externalversions/factory.go:117 (started: 2023-02-22 10:23:58.236291671 +0000 UTC m=+42570.078923139) (total time: 10.571402146s):
Trace[51951775]: [10.57137078s] [10.57137078s] Objects listed
I0222 10:24:08.809589       1 trace.go:116] Trace[588323607]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:135 (started: 2023-02-22 10:23:30.887840932 +0000 UTC m=+42542.730472436) (total time: 37.921701989s):
Trace[588323607]: [37.921662831s] [37.921662831s] Objects listed
I0222 10:24:08.810711       1 trace.go:116] Trace[2058614871]: "Reflector ListAndWatch" name:github.com/longhorn/longhorn-manager/k8s/pkg/client/informers/externalversions/factory.go:117 (started: 2023-02-22 10:23:23.396401791 +0000 UTC m=+42535.239033261) (total time: 45.414256429s):
Trace[2058614871]: [45.414246948s] [45.414246948s] Objects listed
I0222 10:24:08.811507       1 trace.go:116] Trace[372129194]: "Reflector ListAndWatch" name:github.com/longhorn/longhorn-manager/k8s/pkg/client/informers/externalversions/factory.go:117 (started: 2023-02-22 10:23:29.143643076 +0000 UTC m=+42540.986274550) (total time: 39.66781786s):
Trace[372129194]: [39.667745972s] [39.667745972s] Objects listed
I0222 10:24:08.830348       1 trace.go:116] Trace[1866763183]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:135 (started: 2023-02-22 10:23:30.887534369 +0000 UTC m=+42542.730165792) (total time: 37.942775705s):
Trace[1866763183]: [37.942734264s] [37.942734264s] Objects listed
I0222 10:24:10.405806       1 trace.go:116] Trace[155019938]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:135 (started: 2023-02-22 10:23:26.795992778 +0000 UTC m=+42538.638624322) (total time: 43.609716405s):
Trace[155019938]: [43.48224098s] [43.48224098s] Objects listed
I0222 10:24:10.407206       1 trace.go:116] Trace[147996390]: "Reflector ListAndWatch" name:github.com/longhorn/longhorn-manager/k8s/pkg/client/informers/externalversions/factory.go:117 (started: 2023-02-22 10:23:58.447734638 +0000 UTC m=+42570.290366089) (total time: 11.784184606s):
Trace[147996390]: [10.367157117s] [10.367157117s] Objects listed
I0222 10:24:10.430036       1 trace.go:116] Trace[1904546694]: "Reflector ListAndWatch" name:github.com/longhorn/longhorn-manager/k8s/pkg/client/informers/externalversions/factory.go:117 (started: 2023-02-22 10:23:28.689303059 +0000 UTC m=+42540.531934523) (total time: 41.740641642s):
Trace[1904546694]: [41.740570071s] [41.740570071s] Objects listed
time="2023-02-22T10:24:37Z" level=warning msg="The related node getlab-run-int02 of instance pvc-2e73556c-0fc7-4d8d-92d3-5ea2b5637f6e-e-fbc634e8 is down or deleted, will mark the instance as state UNKNOWN"
time="2023-02-22T10:24:37Z" level=debug msg="Instance handler updated instance pvc-2e73556c-0fc7-4d8d-92d3-5ea2b5637f6e-e-fbc634e8 state, old state running, new state unknown"
time="2023-02-22T10:25:55Z" level=warning msg="Problem killing process pid=30487: os: process already finished"
E0222 10:26:16.061524       1 engine_controller.go:695] failed to update status for engine pvc-3646eb36-8df0-42c7-851b-f85f9d932e76-e-68845f8c: cannot get volume info: Timeout executing: /var/lib/longhorn/engine-binaries/rancher-mirrored-longhornio-longhorn-engine-v1.2.2/longhorn [--url 10.42.9.215:10001 info], output {
	"name": "pvc-3646eb36-8df0-42c7-851b-f85f9d932e76",
	"size": 64424509440,
	"replicaCount": 3,
	"endpoint": "/dev/longhorn/pvc-3646eb36-8df0-42c7-851b-f85f9d932e76",
	"frontend": "tgt-blockdev",
	"frontendState": "up",
	"isExpanding": false,
	"lastExpansionError": "",
	"lastExpansionFailedAt": ""
}
, stderr, , error <nil>
time="2023-02-22T10:26:16Z" level=info msg="stop monitoring the engine on this node (getlab-run-int02) because the engine has new ownerID cicd-tools02" controller=longhorn-engine engine=pvc-3646eb36-8df0-42c7-851b-f85f9d932e76-e-68845f8c node=getlab-run-int02
time="2023-02-22T10:26:16Z" level=debug msg="Stop monitoring engine" controller=longhorn-engine engine=pvc-3646eb36-8df0-42c7-851b-f85f9d932e76-e-68845f8c node=getlab-run-int02
I0222 10:26:08.468537       1 trace.go:116] Trace[1586757052]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:135 (started: 2023-02-22 10:23:24.187687811 +0000 UTC m=+42536.030319289) (total time: 2m15.57676209s):
Trace[1586757052]: [1m12.827621582s] [1m12.827621582s] Objects listed
Trace[1586757052]: [2m15.576611629s] [1m2.748989019s] Objects extracted
10.42.10.126 - - [22/Feb/2023:10:23:58 +0000] "GET /metrics HTTP/1.1" 200 955 "" "Prometheus/2.27.1"
I0222 10:26:16.639403       1 trace.go:116] Trace[140422282]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:135 (started: 2023-02-22 10:23:58.586574881 +0000 UTC m=+42570.429206289) (total time: 2m18.052755263s):
Trace[140422282]: [1m56.575663606s] [1m56.575663606s] Objects listed
Trace[140422282]: [1m58.844267692s] [2.268604086s] Resource version extracted
Trace[140422282]: [2m17.550412669s] [18.706144977s] Objects extracted
time="2023-02-22T10:26:16Z" level=info msg="Event(v1.ObjectReference{Kind:\"Node\", Namespace:\"longhorn-system\", Name:\"getlab-run-int02\", UID:\"87a0dce9-5e21-460a-a6cb-54d1da3de9de\", APIVersion:\"longhorn.io/v1beta1\", ResourceVersion:\"217101813\", FieldPath:\"\"}): type: 'Warning' reason: 'Schedulable' the disk default-disk-95973be465894c46(/var/lib/longhorn/) on the node getlab-run-int02 is not ready"
W0222 10:26:16.799199       1 reflector.go:405] k8s.io/client-go/informers/factory.go:135: watch of *v1.StorageClass ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
W0222 10:26:16.799225       1 reflector.go:405] github.com/longhorn/longhorn-manager/k8s/pkg/client/informers/externalversions/factory.go:117: watch of *v1beta1.Volume ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
E0222 10:26:16.799303       1 reflector.go:383] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.DaemonSet: Get "https://10.43.0.1:443/apis/apps/v1/daemonsets?allowWatchBookmarks=true&resourceVersion=217354007&timeout=9m5s&timeoutSeconds=545&watch=true": http2: client connection lost
W0222 10:26:16.799247       1 reflector.go:405] k8s.io/client-go/informers/factory.go:135: watch of *v1.PersistentVolumeClaim ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
W0222 10:26:16.799352       1 reflector.go:405] k8s.io/client-go/informers/factory.go:135: watch of *v1.PersistentVolume ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
W0222 10:26:16.799382       1 reflector.go:405] github.com/longhorn/longhorn-manager/k8s/pkg/client/informers/externalversions/factory.go:117: watch of *v1beta1.Backup ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
W0222 10:26:16.799391       1 reflector.go:405] github.com/longhorn/longhorn-manager/k8s/pkg/client/informers/externalversions/factory.go:117: watch of *v1beta1.Node ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
W0222 10:26:16.799406       1 reflector.go:405] github.com/longhorn/longhorn-manager/k8s/pkg/client/informers/externalversions/factory.go:117: watch of *v1beta1.InstanceManager ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
W0222 10:26:16.799442       1 reflector.go:405] github.com/longhorn/longhorn-manager/k8s/pkg/client/informers/externalversions/factory.go:117: watch of *v1beta1.Setting ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
W0222 10:26:16.799510       1 reflector.go:405] github.com/longhorn/longhorn-manager/k8s/pkg/client/informers/externalversions/factory.go:117: watch of *v1beta1.Replica ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
W0222 10:26:16.799539       1 reflector.go:405] github.com/longhorn/longhorn-manager/k8s/pkg/client/informers/externalversions/factory.go:117: watch of *v1beta1.BackingImageManager ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
W0222 10:26:16.799552       1 reflector.go:405] github.com/longhorn/longhorn-manager/k8s/pkg/client/informers/externalversions/factory.go:117: watch of *v1beta1.ShareManager ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
E0222 10:26:16.799552       1 volume_controller.go:214] fail to sync longhorn-system/pvc-2e73556c-0fc7-4d8d-92d3-5ea2b5637f6e: Put "https://10.43.0.1:443/apis/longhorn.io/v1beta1/namespaces/longhorn-system/volumes/pvc-2e73556c-0fc7-4d8d-92d3-5ea2b5637f6e/status": http2: client connection lost
W0222 10:26:16.799566       1 reflector.go:405] k8s.io/client-go/informers/factory.go:135: watch of *v1.PriorityClass ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
W0222 10:26:16.799592       1 reflector.go:405] k8s.io/client-go/informers/factory.go:135: watch of *v1beta1.PodDisruptionBudget ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
E0222 10:26:16.799417       1 reflector.go:383] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.Node: Get "https://10.43.0.1:443/api/v1/nodes?allowWatchBookmarks=true&resourceVersion=217355176&timeout=7m57s&timeoutSeconds=477&watch=true": http2: client connection lost
E0222 10:26:16.799493       1 reflector.go:383] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.CSIDriver: Get "https://10.43.0.1:443/apis/storage.k8s.io/v1/csidrivers?allowWatchBookmarks=true&resourceVersion=217352775&timeout=9m6s&timeoutSeconds=546&watch=true": http2: client connection lost
W0222 10:26:16.799597       1 reflector.go:405] github.com/longhorn/longhorn-manager/k8s/pkg/client/informers/externalversions/factory.go:117: watch of *v1beta1.EngineImage ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
time="2023-02-22T10:26:16Z" level=debug msg="removed the engine from ec.engineMonitorMap" controller=longhorn-engine engine=pvc-3646eb36-8df0-42c7-851b-f85f9d932e76-e-68845f8c node=getlab-run-int02
W0222 10:26:16.799668       1 reflector.go:405] k8s.io/client-go/informers/factory.go:135: watch of *v1beta1.CronJob ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
E0222 10:26:16.799468       1 replica_controller.go:201] fail to sync replica for longhorn-system/pvc-5d333ff6-8f37-4417-ba07-80ac9894aa00-r-445efc4e: Put "https://10.43.0.1:443/apis/longhorn.io/v1beta1/namespaces/longhorn-system/replicas/pvc-5d333ff6-8f37-4417-ba07-80ac9894aa00-r-445efc4e/status": http2: client connection lost
E0222 10:26:16.799684       1 request.go:975] Unexpected error when reading response body: http2: client connection lost
E0222 10:26:16.799816       1 volume_controller.go:214] fail to sync longhorn-system/pvc-3646eb36-8df0-42c7-851b-f85f9d932e76: Put "https://10.43.0.1:443/apis/longhorn.io/v1beta1/namespaces/longhorn-system/volumes/pvc-3646eb36-8df0-42c7-851b-f85f9d932e76/status": http2: client connection lost
time="2023-02-22T10:26:16Z" level=info msg="Event(v1.ObjectReference{Kind:\"Node\", Namespace:\"longhorn-system\", Name:\"getlab-run-int02\", UID:\"87a0dce9-5e21-460a-a6cb-54d1da3de9de\", APIVersion:\"longhorn.io/v1beta1\", ResourceVersion:\"217101813\", FieldPath:\"\"}): type: 'Warning' reason: 'Ready' Disk default-disk-95973be465894c46(/var/lib/longhorn/) on node getlab-run-int02 is not ready: failed to get disk config: error: Invalid net namespace /host/proc/1/ns/net, error Timeout executing: nsenter [--net=/host/proc/1/ns/net ip addr], output , stderr, , error <nil>"
W0222 10:26:16.799891       1 reflector.go:405] github.com/longhorn/longhorn-manager/k8s/pkg/client/informers/externalversions/factory.go:117: watch of *v1beta1.BackupVolume ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
W0222 10:26:16.799735       1 reflector.go:405] github.com/longhorn/longhorn-manager/k8s/pkg/client/informers/externalversions/factory.go:117: watch of *v1beta1.RecurringJob ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
W0222 10:26:16.799717       1 reflector.go:405] github.com/longhorn/longhorn-manager/k8s/pkg/client/informers/externalversions/factory.go:117: watch of *v1beta1.BackingImage ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
W0222 10:26:16.799765       1 reflector.go:405] github.com/longhorn/longhorn-manager/k8s/pkg/client/informers/externalversions/factory.go:117: watch of *v1beta1.BackingImageDataSource ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
I0222 10:26:16.799792       1 trace.go:116] Trace[807523107]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:135 (started: 2023-02-22 10:23:30.956076904 +0000 UTC m=+42542.798708378) (total time: 2m45.843651515s):
Trace[807523107]: [2m45.843651515s] [2m45.843651515s] END
E0222 10:26:16.799997       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Secret: Get "https://10.43.0.1:443/api/v1/secrets?resourceVersion=217353275": http2: client connection lost
E0222 10:26:16.800005       1 node_controller.go:263] fail to sync node for longhorn-system/getlab-run-int02: Put "https://10.43.0.1:443/apis/longhorn.io/v1beta1/namespaces/longhorn-system/nodes/getlab-run-int02/status": http2: client connection lost
I0222 10:26:16.800099       1 trace.go:116] Trace[606321842]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:135 (started: 2023-02-22 10:23:44.659270363 +0000 UTC m=+42556.501901792) (total time: 2m32.140774102s):
Trace[606321842]: [2m32.140774102s] [2m32.140774102s] END
E0222 10:26:16.800137       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ConfigMap: unexpected error when reading response body. Please retry. Original error: http2: client connection lost
E0222 10:26:16.804151       1 request.go:975] Unexpected error when reading response body: http2: client connection lost
I0222 10:26:16.804406       1 trace.go:116] Trace[327345949]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:135 (started: 2023-02-22 10:23:58.684366041 +0000 UTC m=+42570.526997498) (total time: 2m18.119950677s):
Trace[327345949]: [2m18.119950677s] [2m18.119950677s] END
E0222 10:26:16.804451       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Pod: unexpected error when reading response body. Please retry. Original error: http2: client connection lost
E0222 10:26:16.869539       1 engine_controller.go:616] failed to update engine pvc-2e73556c-0fc7-4d8d-92d3-5ea2b5637f6e-e-fbc634e8 to stop monitoring: failed to reset engine status for pvc-2e73556c-0fc7-4d8d-92d3-5ea2b5637f6e-e-fbc634e8: Operation cannot be fulfilled on engines.longhorn.io "pvc-2e73556c-0fc7-4d8d-92d3-5ea2b5637f6e-e-fbc634e8": the object has been modified; please apply your changes to the latest version and try again
W0222 10:26:16.799688       1 reflector.go:405] k8s.io/client-go/informers/factory.go:135: watch of *v1.Service ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
time="2023-02-22T10:26:16Z" level=info msg="stop monitoring the engine on this node (getlab-run-int02) because the engine has new ownerID gilab-web-int01" controller=longhorn-engine engine=pvc-2e73556c-0fc7-4d8d-92d3-5ea2b5637f6e-e-fbc634e8 node=getlab-run-int02
time="2023-02-22T10:26:16Z" level=debug msg="Stop monitoring engine" controller=longhorn-engine engine=pvc-2e73556c-0fc7-4d8d-92d3-5ea2b5637f6e-e-fbc634e8 node=getlab-run-int02
time="2023-02-22T10:26:16Z" level=debug msg="removed the engine from ec.engineMonitorMap" controller=longhorn-engine engine=pvc-2e73556c-0fc7-4d8d-92d3-5ea2b5637f6e-e-fbc634e8 node=getlab-run-int02
E0222 10:26:16.870715       1 instance_manager_controller.go:1182] failed to update instance map for instance manager instance-manager-r-c73a27d6: Operation cannot be fulfilled on instancemanagers.longhorn.io "instance-manager-r-c73a27d6": the object has been modified; please apply your changes to the latest version and try again
time="2023-02-22T10:26:16Z" level=warning msg="Dropping Longhorn volume longhorn-system/pvc-2e73556c-0fc7-4d8d-92d3-5ea2b5637f6e out of the queue" controller=longhorn-volume error="fail to sync longhorn-system/pvc-2e73556c-0fc7-4d8d-92d3-5ea2b5637f6e: Put \"https://10.43.0.1:443/apis/longhorn.io/v1beta1/namespaces/longhorn-system/volumes/pvc-2e73556c-0fc7-4d8d-92d3-5ea2b5637f6e/status\": http2: client connection lost" node=getlab-run-int02
time="2023-02-22T10:26:16Z" level=info msg="Engine got new owner getlab-run-int02" controller=longhorn-engine engine=pvc-3646eb36-8df0-42c7-851b-f85f9d932e76-e-68845f8c node=getlab-run-int02
time="2023-02-22T10:26:16Z" level=warning msg="The related node getlab-run-int02 of instance pvc-3646eb36-8df0-42c7-851b-f85f9d932e76-e-68845f8c is down or deleted, will mark the instance as state UNKNOWN"
time="2023-02-22T10:26:16Z" level=warning msg="The related node getlab-run-int02 of instance pvc-3646eb36-8df0-42c7-851b-f85f9d932e76-e-68845f8c is down or deleted, will mark the instance as state UNKNOWN"
time="2023-02-22T10:26:16Z" level=warning msg="Dropping Longhorn replica longhorn-system/pvc-5d333ff6-8f37-4417-ba07-80ac9894aa00-r-445efc4e out of the queue" controller=longhorn-replica error="fail to sync replica for longhorn-system/pvc-5d333ff6-8f37-4417-ba07-80ac9894aa00-r-445efc4e: Put \"https://10.43.0.1:443/apis/longhorn.io/v1beta1/namespaces/longhorn-system/replicas/pvc-5d333ff6-8f37-4417-ba07-80ac9894aa00-r-445efc4e/status\": http2: client connection lost" node=getlab-run-int02
time="2023-02-22T10:26:16Z" level=warning msg="Dropping Longhorn volume longhorn-system/pvc-3646eb36-8df0-42c7-851b-f85f9d932e76 out of the queue" controller=longhorn-volume error="fail to sync longhorn-system/pvc-3646eb36-8df0-42c7-851b-f85f9d932e76: Put \"https://10.43.0.1:443/apis/longhorn.io/v1beta1/namespaces/longhorn-system/volumes/pvc-3646eb36-8df0-42c7-851b-f85f9d932e76/status\": http2: client connection lost" node=getlab-run-int02
time="2023-02-22T10:26:16Z" level=warning msg="Dropping Longhorn node longhorn-system/getlab-run-int02 out of the queue: fail to sync node for longhorn-system/getlab-run-int02: Put \"https://10.43.0.1:443/apis/longhorn.io/v1beta1/namespaces/longhorn-system/nodes/getlab-run-int02/status\": http2: client connection lost"
time="2023-02-22T10:26:16Z" level=info msg="Event(v1.ObjectReference{Kind:\"Node\", Namespace:\"longhorn-system\", Name:\"getlab-run-int02\", UID:\"87a0dce9-5e21-460a-a6cb-54d1da3de9de\", APIVersion:\"longhorn.io/v1beta1\", ResourceVersion:\"217353914\", FieldPath:\"\"}): type: 'Normal' reason: 'Ready' Node getlab-run-int02 is ready"
time="2023-02-22T10:26:16Z" level=info msg="Event(v1.ObjectReference{Kind:\"Node\", Namespace:\"longhorn-system\", Name:\"getlab-run-int02\", UID:\"87a0dce9-5e21-460a-a6cb-54d1da3de9de\", APIVersion:\"longhorn.io/v1beta1\", ResourceVersion:\"217353914\", FieldPath:\"\"}): type: 'Warning' reason: 'Ready' Kubernetes node getlab-run-int02 not ready: NodeStatusUnknown"
time="2023-02-22T10:26:16Z" level=debug msg="Requeue engine due to conflict" controller=longhorn-engine engine=pvc-2e73556c-0fc7-4d8d-92d3-5ea2b5637f6e-e-fbc634e8 error="Operation cannot be fulfilled on engines.longhorn.io \"pvc-2e73556c-0fc7-4d8d-92d3-5ea2b5637f6e-e-fbc634e8\": the object has been modified; please apply your changes to the latest version and try again" node=getlab-run-int02
time="2023-02-22T10:26:16Z" level=info msg="Engine got new owner getlab-run-int02" controller=longhorn-engine engine=pvc-3646eb36-8df0-42c7-851b-f85f9d932e76-e-68845f8c node=getlab-run-int02
time="2023-02-22T10:26:16Z" level=warning msg="The related node getlab-run-int02 of instance pvc-3646eb36-8df0-42c7-851b-f85f9d932e76-e-68845f8c is down or deleted, will mark the instance as state UNKNOWN"
time="2023-02-22T10:26:16Z" level=info msg="Engine got new owner getlab-run-int02" controller=longhorn-engine engine=pvc-2e73556c-0fc7-4d8d-92d3-5ea2b5637f6e-e-fbc634e8 node=getlab-run-int02
time="2023-02-22T10:26:16Z" level=warning msg="The related node getlab-run-int02 of instance pvc-2e73556c-0fc7-4d8d-92d3-5ea2b5637f6e-e-fbc634e8 is down or deleted, will mark the instance as state UNKNOWN"
time="2023-02-22T10:26:16Z" level=warning msg="The related node getlab-run-int02 of instance pvc-3646eb36-8df0-42c7-851b-f85f9d932e76-e-68845f8c is down or deleted, will mark the instance as state UNKNOWN"
time="2023-02-22T10:26:16Z" level=warning msg="The related node getlab-run-int02 of instance pvc-2e73556c-0fc7-4d8d-92d3-5ea2b5637f6e-e-fbc634e8 is down or deleted, will mark the instance as state UNKNOWN"
10.42.10.126 - - [22/Feb/2023:10:26:16 +0000] "GET /metrics HTTP/1.1" 200 711 "" "Prometheus/2.27.1"
10.42.10.126 - - [22/Feb/2023:10:26:16 +0000] "GET /metrics HTTP/1.1" 200 711 "" "Prometheus/2.27.1"
time="2023-02-22T10:26:17Z" level=info msg="Engine got new owner getlab-run-int02" controller=longhorn-engine engine=pvc-2e73556c-0fc7-4d8d-92d3-5ea2b5637f6e-e-fbc634e8 node=getlab-run-int02
time="2023-02-22T10:26:17Z" level=warning msg="The related node getlab-run-int02 of instance pvc-2e73556c-0fc7-4d8d-92d3-5ea2b5637f6e-e-fbc634e8 is down or deleted, will mark the instance as state UNKNOWN"
time="2023-02-22T10:26:17Z" level=warning msg="The related node getlab-run-int02 of instance pvc-2e73556c-0fc7-4d8d-92d3-5ea2b5637f6e-e-fbc634e8 is down or deleted, will mark the instance as state UNKNOWN"
time="2023-02-22T10:26:17Z" level=info msg="Engine got new owner getlab-run-int02" controller=longhorn-engine engine=pvc-3646eb36-8df0-42c7-851b-f85f9d932e76-e-68845f8c node=getlab-run-int02
time="2023-02-22T10:26:17Z" level=warning msg="The related node getlab-run-int02 of instance pvc-3646eb36-8df0-42c7-851b-f85f9d932e76-e-68845f8c is down or deleted, will mark the instance as state UNKNOWN"
time="2023-02-22T10:26:18Z" level=info msg="Engine got new owner getlab-run-int02" controller=longhorn-engine engine=pvc-2e73556c-0fc7-4d8d-92d3-5ea2b5637f6e-e-fbc634e8 node=getlab-run-int02
time="2023-02-22T10:26:18Z" level=warning msg="The related node getlab-run-int02 of instance pvc-2e73556c-0fc7-4d8d-92d3-5ea2b5637f6e-e-fbc634e8 is down or deleted, will mark the instance as state UNKNOWN"
time="2023-02-22T10:26:18Z" level=warning msg="The related node getlab-run-int02 of instance pvc-2e73556c-0fc7-4d8d-92d3-5ea2b5637f6e-e-fbc634e8 is down or deleted, will mark the instance as state UNKNOWN"
E0222 10:26:18.132141       1 kubernetes_pv_controller.go:310] received unexpected obj: cache.DeletedFinalStateUnknown{Key:"gitlab/runner-ahevdtxy-project-4-concurrent-24bvws", Obj:(*v1.Pod)(0xc00180ac00)}
E0222 10:26:18.132331       1 kubernetes_pv_controller.go:310] received unexpected obj: cache.DeletedFinalStateUnknown{Key:"gitlab/gitlab-registry-577dcbb7d9-4dpkn", Obj:(*v1.Pod)(0xc001ddb400)}
E0222 10:26:18.132436       1 kubernetes_pod_controller.go:320] received unexpected obj: cache.DeletedFinalStateUnknown{Key:"gitlab/runner-ahevdtxy-project-4-concurrent-24bvws", Obj:(*v1.Pod)(0xc00180ac00)}
E0222 10:26:18.133848       1 kubernetes_pv_controller.go:310] received unexpected obj: cache.DeletedFinalStateUnknown{Key:"gitlab/runner-ahevdtxy-project-4-concurrent-4vqxk7", Obj:(*v1.Pod)(0xc00299e400)}
E0222 10:26:18.135052       1 kubernetes_pv_controller.go:310] received unexpected obj: cache.DeletedFinalStateUnknown{Key:"gitlab/runner-ahevdtxy-project-4-concurrent-58qhfn", Obj:(*v1.Pod)(0xc0018e7400)}
E0222 10:26:18.136279       1 kubernetes_pod_controller.go:320] received unexpected obj: cache.DeletedFinalStateUnknown{Key:"gitlab/gitlab-registry-577dcbb7d9-4dpkn", Obj:(*v1.Pod)(0xc001ddb400)}
E0222 10:26:18.140335       1 kubernetes_pod_controller.go:320] received unexpected obj: cache.DeletedFinalStateUnknown{Key:"gitlab/runner-ahevdtxy-project-4-concurrent-4vqxk7", Obj:(*v1.Pod)(0xc00299e400)}
E0222 10:26:18.141584       1 kubernetes_pv_controller.go:310] received unexpected obj: cache.DeletedFinalStateUnknown{Key:"gitlab/gitlab-registry-577dcbb7d9-t98d2", Obj:(*v1.Pod)(0xc001ddac00)}
E0222 10:26:18.142729       1 kubernetes_pv_controller.go:310] received unexpected obj: cache.DeletedFinalStateUnknown{Key:"gitlab/runner-ahevdtxy-project-4-concurrent-96psg2", Obj:(*v1.Pod)(0xc002303000)}
E0222 10:26:18.143924       1 kubernetes_pod_controller.go:320] received unexpected obj: cache.DeletedFinalStateUnknown{Key:"gitlab/runner-ahevdtxy-project-4-concurrent-58qhfn", Obj:(*v1.Pod)(0xc0018e7400)}
E0222 10:26:18.145073       1 kubernetes_pod_controller.go:320] received unexpected obj: cache.DeletedFinalStateUnknown{Key:"gitlab/gitlab-registry-577dcbb7d9-t98d2", Obj:(*v1.Pod)(0xc001ddac00)}
E0222 10:26:18.146306       1 kubernetes_pv_controller.go:310] received unexpected obj: cache.DeletedFinalStateUnknown{Key:"gitlab/runner-ahevdtxy-project-4-concurrent-3j7dpx", Obj:(*v1.Pod)(0xc002349c00)}
E0222 10:26:18.147606       1 kubernetes_pod_controller.go:320] received unexpected obj: cache.DeletedFinalStateUnknown{Key:"gitlab/runner-ahevdtxy-project-4-concurrent-96psg2", Obj:(*v1.Pod)(0xc002303000)}
E0222 10:26:18.148818       1 kubernetes_pod_controller.go:320] received unexpected obj: cache.DeletedFinalStateUnknown{Key:"gitlab/runner-ahevdtxy-project-4-concurrent-3j7dpx", Obj:(*v1.Pod)(0xc002349c00)}
E0222 10:26:18.149998       1 kubernetes_pv_controller.go:310] received unexpected obj: cache.DeletedFinalStateUnknown{Key:"gitlab/runner-yisrdepi-project-5-concurrent-2rgqcf", Obj:(*v1.Pod)(0xc001d41c00)}
E0222 10:26:18.151236       1 kubernetes_pv_controller.go:310] received unexpected obj: cache.DeletedFinalStateUnknown{Key:"gitlab/runner-yisrdepi-project-5-concurrent-42rqwf", Obj:(*v1.Pod)(0xc001fcb400)}
E0222 10:26:18.152403       1 kubernetes_pod_controller.go:320] received unexpected obj: cache.DeletedFinalStateUnknown{Key:"gitlab/runner-yisrdepi-project-5-concurrent-2rgqcf", Obj:(*v1.Pod)(0xc001d41c00)}
E0222 10:26:18.153580       1 kubernetes_pod_controller.go:320] received unexpected obj: cache.DeletedFinalStateUnknown{Key:"gitlab/runner-yisrdepi-project-5-concurrent-42rqwf", Obj:(*v1.Pod)(0xc001fcb400)}
E0222 10:26:18.154762       1 kubernetes_pv_controller.go:310] received unexpected obj: cache.DeletedFinalStateUnknown{Key:"gitlab/gitlab-registry-577dcbb7d9-bprms", Obj:(*v1.Pod)(0xc000d5e800)}
E0222 10:26:18.155975       1 kubernetes_pv_controller.go:310] received unexpected obj: cache.DeletedFinalStateUnknown{Key:"gitlab/gitlab-registry-577dcbb7d9-gc7p7", Obj:(*v1.Pod)(0xc00219bc00)}
E0222 10:26:18.157626       1 kubernetes_pod_controller.go:320] received unexpected obj: cache.DeletedFinalStateUnknown{Key:"gitlab/gitlab-registry-577dcbb7d9-bprms", Obj:(*v1.Pod)(0xc000d5e800)}
E0222 10:26:18.158911       1 kubernetes_pod_controller.go:320] received unexpected obj: cache.DeletedFinalStateUnknown{Key:"gitlab/gitlab-registry-577dcbb7d9-gc7p7", Obj:(*v1.Pod)(0xc00219bc00)}
time="2023-02-22T10:26:18Z" level=info msg="Engine got new owner getlab-run-int02" controller=longhorn-engine engine=pvc-3646eb36-8df0-42c7-851b-f85f9d932e76-e-68845f8c node=getlab-run-int02
time="2023-02-22T10:26:18Z" level=warning msg="The related node getlab-run-int02 of instance pvc-3646eb36-8df0-42c7-851b-f85f9d932e76-e-68845f8c is down or deleted, will mark the instance as state UNKNOWN"
time="2023-02-22T10:26:18Z" level=warning msg="The related node getlab-run-int02 of instance pvc-3646eb36-8df0-42c7-851b-f85f9d932e76-e-68845f8c is down or deleted, will mark the instance as state UNKNOWN"
I0222 10:26:18.731889       1 request.go:621] Throttling request took 1.095894753s, request: PUT:https://10.43.0.1:443/apis/longhorn.io/v1beta1/namespaces/longhorn-system/volumes/pvc-2e73556c-0fc7-4d8d-92d3-5ea2b5637f6e/status
time="2023-02-22T10:26:19Z" level=error msg="Unable to verify the update of instance-manager-r-c73a27d6"
time="2023-02-22T10:26:19Z" level=debug msg="Instance Manager Controller getlab-run-int02 picked up instance-manager-r-c73a27d6" controller=longhorn-instance-manager instanceManager=instance-manager-r-c73a27d6 node=getlab-run-int02 nodeID=getlab-run-int02
time="2023-02-22T10:26:19Z" level=error msg="Unable to verify the update of instance-manager-e-28f5f113"
time="2023-02-22T10:26:19Z" level=debug msg="Instance Manager Controller getlab-run-int02 picked up instance-manager-e-28f5f113" controller=longhorn-instance-manager instanceManager=instance-manager-e-28f5f113 node=getlab-run-int02 nodeID=getlab-run-int02
time="2023-02-22T10:26:19Z" level=debug msg="Stop monitoring instance manager instance-manager-e-28f5f113" controller=longhorn-instance-manager instance manager=instance-manager-e-28f5f113 node=getlab-run-int02
time="2023-02-22T10:26:19Z" level=debug msg="removed the engine from imc.instanceManagerMonitorMap" controller=longhorn-instance-manager instance manager=instance-manager-e-28f5f113 node=getlab-run-int02
time="2023-02-22T10:26:19Z" level=error msg="error receiving next item in engine watch: rpc error: code = Unavailable desc = transport is closing" controller=longhorn-instance-manager instance manager=instance-manager-r-c73a27d6 node=getlab-run-int02
time="2023-02-22T10:26:19Z" level=debug msg="Stop monitoring instance manager instance-manager-r-c73a27d6" controller=longhorn-instance-manager instance manager=instance-manager-r-c73a27d6 node=getlab-run-int02
time="2023-02-22T10:26:19Z" level=debug msg="removed the engine from imc.instanceManagerMonitorMap" controller=longhorn-instance-manager instance manager=instance-manager-r-c73a27d6 node=getlab-run-int02
time="2023-02-22T10:26:19Z" level=error msg="error receiving next item in engine watch: rpc error: code = Unavailable desc = transport is closing" controller=longhorn-instance-manager instance manager=instance-manager-e-28f5f113 node=getlab-run-int02
time="2023-02-22T10:26:20Z" level=debug msg="Replica controller picked up" controller=longhorn-replica controllerID=getlab-run-int02 dataPath= node=getlab-run-int02 nodeID=getlab-run-int02 ownerID=gitlab-run-int01 replica=pvc-14ac7186-ee21-4b37-8d80-b4307f1f2b69-r-00905b87
time="2023-02-22T10:26:20Z" level=info msg="Engine got new owner getlab-run-int02" controller=longhorn-engine engine=pvc-2e73556c-0fc7-4d8d-92d3-5ea2b5637f6e-e-fbc634e8 node=getlab-run-int02
time="2023-02-22T10:26:20Z" level=warning msg="The related node getlab-run-int02 of instance pvc-2e73556c-0fc7-4d8d-92d3-5ea2b5637f6e-e-fbc634e8 is down or deleted, will mark the instance as state UNKNOWN"
time="2023-02-22T10:26:20Z" level=warning msg="The related node getlab-run-int02 of instance pvc-2e73556c-0fc7-4d8d-92d3-5ea2b5637f6e-e-fbc634e8 is down or deleted, will mark the instance as state UNKNOWN"
time="2023-02-22T10:26:21Z" level=info msg="Engine got new owner getlab-run-int02" controller=longhorn-engine engine=pvc-3646eb36-8df0-42c7-851b-f85f9d932e76-e-68845f8c node=getlab-run-int02
time="2023-02-22T10:26:21Z" level=warning msg="The related node getlab-run-int02 of instance pvc-3646eb36-8df0-42c7-851b-f85f9d932e76-e-68845f8c is down or deleted, will mark the instance as state UNKNOWN"
time="2023-02-22T10:26:21Z" level=warning msg="The related node getlab-run-int02 of instance pvc-3646eb36-8df0-42c7-851b-f85f9d932e76-e-68845f8c is down or deleted, will mark the instance as state UNKNOWN"
time="2023-02-22T10:26:21Z" level=warning msg="The related node getlab-run-int02 of instance pvc-3646eb36-8df0-42c7-851b-f85f9d932e76-e-68845f8c is down or deleted, will mark the instance as state UNKNOWN"
time="2023-02-22T10:26:21Z" level=debug msg="Requeue longhorn-system/getlab-run-int02 due to conflict: Operation cannot be fulfilled on nodes.longhorn.io \"getlab-run-int02\": the object has been modified; please apply your changes to the latest version and try again"
time="2023-02-22T10:26:22Z" level=debug msg="Volume got new owner getlab-run-int02" accessMode=rwo controller=longhorn-volume frontend=blockdev migratable=false node=getlab-run-int02 owner=gilap-web-int02 state=attached volume=pvc-3646eb36-8df0-42c7-851b-f85f9d932e76
time="2023-02-22T10:26:22Z" level=debug msg="Replica controller picked up" controller=longhorn-replica controllerID=getlab-run-int02 dataPath= node=getlab-run-int02 nodeID=getlab-run-int02 ownerID=cicd-monitor01 replica=pvc-5d333ff6-8f37-4417-ba07-80ac9894aa00-r-445efc4e
time="2023-02-22T10:26:23Z" level=debug msg="Replica controller picked up" controller=longhorn-replica controllerID=getlab-run-int02 dataPath= node=getlab-run-int02 nodeID=getlab-run-int02 ownerID=gilab-web-int03 replica=pvc-14ac7186-ee21-4b37-8d80-b4307f1f2b69-r-00905b87
time="2023-02-22T10:26:23Z" level=debug msg="Instance Manager Controller getlab-run-int02 picked up instance-manager-e-28f5f113" controller=longhorn-instance-manager instanceManager=instance-manager-e-28f5f113 node=getlab-run-int02 nodeID=getlab-run-int02
time="2023-02-22T10:26:23Z" level=warning msg="The related node getlab-run-int02 of instance pvc-3646eb36-8df0-42c7-851b-f85f9d932e76-e-68845f8c is down or deleted, will mark the instance as state UNKNOWN"
time="2023-02-22T10:26:23Z" level=warning msg="The related node getlab-run-int02 of instance pvc-2e73556c-0fc7-4d8d-92d3-5ea2b5637f6e-e-fbc634e8 is down or deleted, will mark the instance as state UNKNOWN"
time="2023-02-22T10:26:23Z" level=info msg="Event(v1.ObjectReference{Kind:\"Node\", Namespace:\"longhorn-system\", Name:\"getlab-run-int02\", UID:\"87a0dce9-5e21-460a-a6cb-54d1da3de9de\", APIVersion:\"longhorn.io/v1beta1\", ResourceVersion:\"217356308\", FieldPath:\"\"}): type: 'Normal' reason: 'Ready' Node getlab-run-int02 is ready"
time="2023-02-22T10:26:23Z" level=debug msg="Instance Manager Controller getlab-run-int02 picked up instance-manager-r-c73a27d6" controller=longhorn-instance-manager instanceManager=instance-manager-r-c73a27d6 node=getlab-run-int02 nodeID=getlab-run-int02
time="2023-02-22T10:26:23Z" level=debug msg="Volume got new owner getlab-run-int02" accessMode=rwo controller=longhorn-volume frontend=blockdev migratable=false node=getlab-run-int02 owner=gilab-web-int01 state=attached volume=pvc-2e73556c-0fc7-4d8d-92d3-5ea2b5637f6e
time="2023-02-22T10:26:23Z" level=debug msg="Requeue longhorn-system/getlab-run-int02 due to conflict: Operation cannot be fulfilled on nodes.longhorn.io \"getlab-run-int02\": the object has been modified; please apply your changes to the latest version and try again"
time="2023-02-22T10:26:24Z" level=warning msg="Cannot find the instance manager for the running instance pvc-2e73556c-0fc7-4d8d-92d3-5ea2b5637f6e-e-fbc634e8, will mark the instance as state ERROR"
time="2023-02-22T10:26:24Z" level=debug msg="Instance handler updated instance pvc-2e73556c-0fc7-4d8d-92d3-5ea2b5637f6e-e-fbc634e8 state, old state unknown, new state error"
time="2023-02-22T10:26:24Z" level=warning msg="Cannot find the instance manager for the running instance pvc-3646eb36-8df0-42c7-851b-f85f9d932e76-e-68845f8c, will mark the instance as state ERROR"
time="2023-02-22T10:26:24Z" level=debug msg="Instance handler updated instance pvc-3646eb36-8df0-42c7-851b-f85f9d932e76-e-68845f8c state, old state unknown, new state error"
time="2023-02-22T10:26:24Z" level=warning msg="Engine of volume dead unexpectedly, reattach the volume" accessMode=rwo controller=longhorn-volume frontend=blockdev migratable=false node=getlab-run-int02 owner=getlab-run-int02 state=attached volume=pvc-2e73556c-0fc7-4d8d-92d3-5ea2b5637f6e
time="2023-02-22T10:26:24Z" level=info msg="Event(v1.ObjectReference{Kind:\"Volume\", Namespace:\"longhorn-system\", Name:\"pvc-2e73556c-0fc7-4d8d-92d3-5ea2b5637f6e\", UID:\"bde81134-3283-4d81-8296-8748108f4778\", APIVersion:\"longhorn.io/v1beta1\", ResourceVersion:\"217356407\", FieldPath:\"\"}): type: 'Warning' reason: 'DetachedUnexpectly' Engine of volume pvc-2e73556c-0fc7-4d8d-92d3-5ea2b5637f6e dead unexpectedly, reattach the volume"
time="2023-02-22T10:26:24Z" level=warning msg="Engine of volume dead unexpectedly, reattach the volume" accessMode=rwo controller=longhorn-volume frontend=blockdev migratable=false node=getlab-run-int02 owner=getlab-run-int02 state=attached volume=pvc-3646eb36-8df0-42c7-851b-f85f9d932e76
time="2023-02-22T10:26:24Z" level=info msg="Event(v1.ObjectReference{Kind:\"Volume\", Namespace:\"longhorn-system\", Name:\"pvc-3646eb36-8df0-42c7-851b-f85f9d932e76\", UID:\"c656fb93-3798-4138-8d0d-0b56383a8090\", APIVersion:\"longhorn.io/v1beta1\", ResourceVersion:\"217356378\", FieldPath:\"\"}): type: 'Warning' reason: 'DetachedUnexpectly' Engine of volume pvc-3646eb36-8df0-42c7-851b-f85f9d932e76 dead unexpectedly, reattach the volume"
time="2023-02-22T10:26:26Z" level=warning msg="Try to get requested log for pvc-2e73556c-0fc7-4d8d-92d3-5ea2b5637f6e-e-fbc634e8 in instance manager instance-manager-e-28f5f113"
time="2023-02-22T10:26:26Z" level=warning msg="cannot get requested log for instance pvc-2e73556c-0fc7-4d8d-92d3-5ea2b5637f6e-e-fbc634e8 on node getlab-run-int02, error invalid Instance Manager instance-manager-e-28f5f113, state: error, IP: "
time="2023-02-22T10:26:26Z" level=debug msg="Instance handler updated instance pvc-2e73556c-0fc7-4d8d-92d3-5ea2b5637f6e-e-fbc634e8 state, old state error, new state stopped"
time="2023-02-22T10:26:26Z" level=warning msg="Try to get requested log for pvc-3646eb36-8df0-42c7-851b-f85f9d932e76-e-68845f8c in instance manager instance-manager-e-28f5f113"
time="2023-02-22T10:26:26Z" level=warning msg="cannot get requested log for instance pvc-3646eb36-8df0-42c7-851b-f85f9d932e76-e-68845f8c on node getlab-run-int02, error invalid Instance Manager instance-manager-e-28f5f113, state: error, IP: "
time="2023-02-22T10:26:26Z" level=debug msg="Instance handler updated instance pvc-3646eb36-8df0-42c7-851b-f85f9d932e76-e-68845f8c state, old state error, new state stopped"
time="2023-02-22T10:26:27Z" level=info msg="Event(v1.ObjectReference{Kind:\"Volume\", Namespace:\"longhorn-system\", Name:\"pvc-2e73556c-0fc7-4d8d-92d3-5ea2b5637f6e\", UID:\"bde81134-3283-4d81-8296-8748108f4778\", APIVersion:\"longhorn.io/v1beta1\", ResourceVersion:\"217356487\", FieldPath:\"\"}): type: 'Normal' reason: 'Remount' Volume pvc-2e73556c-0fc7-4d8d-92d3-5ea2b5637f6e requested remount at 2023-02-22T10:26:27Z"
time="2023-02-22T10:26:27Z" level=info msg="Event(v1.ObjectReference{Kind:\"Volume\", Namespace:\"longhorn-system\", Name:\"pvc-3646eb36-8df0-42c7-851b-f85f9d932e76\", UID:\"c656fb93-3798-4138-8d0d-0b56383a8090\", APIVersion:\"longhorn.io/v1beta1\", ResourceVersion:\"217356491\", FieldPath:\"\"}): type: 'Normal' reason: 'Remount' Volume pvc-3646eb36-8df0-42c7-851b-f85f9d932e76 requested remount at 2023-02-22T10:26:27Z"
time="2023-02-22T10:26:27Z" level=info msg="Created instance manager pod instance-manager-e-28f5f113 for instance manager instance-manager-e-28f5f113"
time="2023-02-22T10:26:28Z" level=info msg="Created instance manager pod instance-manager-r-c73a27d6 for instance manager instance-manager-r-c73a27d6"
time="2023-02-22T10:26:30Z" level=debug msg="Requeue volume due to error Operation cannot be fulfilled on volumes.longhorn.io \"pvc-2e73556c-0fc7-4d8d-92d3-5ea2b5637f6e\": the object has been modified; please apply your changes to the latest version and try again or <nil>" accessMode=rwo controller=longhorn-volume frontend=blockdev migratable=false node=getlab-run-int02 owner=getlab-run-int02 state=detaching volume=pvc-2e73556c-0fc7-4d8d-92d3-5ea2b5637f6e
time="2023-02-22T10:26:30Z" level=info msg="Event(v1.ObjectReference{Kind:\"Volume\", Namespace:\"longhorn-system\", Name:\"pvc-2e73556c-0fc7-4d8d-92d3-5ea2b5637f6e\", UID:\"bde81134-3283-4d81-8296-8748108f4778\", APIVersion:\"longhorn.io/v1beta1\", ResourceVersion:\"217356547\", FieldPath:\"\"}): type: 'Normal' reason: 'Remount' Volume pvc-2e73556c-0fc7-4d8d-92d3-5ea2b5637f6e requested remount at 2023-02-22T10:26:30Z"
time="2023-02-22T10:26:30Z" level=info msg="Event(v1.ObjectReference{Kind:\"Volume\", Namespace:\"longhorn-system\", Name:\"pvc-2e73556c-0fc7-4d8d-92d3-5ea2b5637f6e\", UID:\"bde81134-3283-4d81-8296-8748108f4778\", APIVersion:\"longhorn.io/v1beta1\", ResourceVersion:\"217356547\", FieldPath:\"\"}): type: 'Normal' reason: 'Detached' volume pvc-2e73556c-0fc7-4d8d-92d3-5ea2b5637f6e has been detached"
time="2023-02-22T10:26:32Z" level=info msg="Event(v1.ObjectReference{Kind:\"Volume\", Namespace:\"longhorn-system\", Name:\"pvc-3646eb36-8df0-42c7-851b-f85f9d932e76\", UID:\"c656fb93-3798-4138-8d0d-0b56383a8090\", APIVersion:\"longhorn.io/v1beta1\", ResourceVersion:\"217356636\", FieldPath:\"\"}): type: 'Normal' reason: 'Detached' volume pvc-3646eb36-8df0-42c7-851b-f85f9d932e76 has been detached"
time="2023-02-22T10:26:33Z" level=info msg="Deleted pod gitlab-minio-648d5957d6-k47kk so that Kubernetes will handle remounting volume pvc-3646eb36-8df0-42c7-851b-f85f9d932e76" controller=longhorn-kubernetes-pod node=getlab-run-int02
time="2023-02-22T10:26:33Z" level=info msg="Deleted pod gitlab-minio-648d5957d6-k47kk so that Kubernetes will handle remounting volume pvc-3646eb36-8df0-42c7-851b-f85f9d932e76" controller=longhorn-kubernetes-pod node=getlab-run-int02
time="2023-02-22T10:26:33Z" level=debug msg="Requeue volume due to error Operation cannot be fulfilled on volumes.longhorn.io \"pvc-3646eb36-8df0-42c7-851b-f85f9d932e76\": the object has been modified; please apply your changes to the latest version and try again or <nil>" accessMode=rwo controller=longhorn-volume frontend=blockdev migratable=false node=getlab-run-int02 owner=getlab-run-int02 state=detached volume=pvc-3646eb36-8df0-42c7-851b-f85f9d932e76
time="2023-02-22T10:26:34Z" level=warning msg="Error syncing Longhorn engine" controller=longhorn-engine engine=longhorn-system/pvc-3646eb36-8df0-42c7-851b-f85f9d932e76-e-68845f8c error="fail to sync engine for longhorn-system/pvc-3646eb36-8df0-42c7-851b-f85f9d932e76-e-68845f8c: invalid Instance Manager instance-manager-e-28f5f113, state: starting, IP: " node=getlab-run-int02
time="2023-02-22T10:26:34Z" level=warning msg="Error syncing Longhorn engine" controller=longhorn-engine engine=longhorn-system/pvc-3646eb36-8df0-42c7-851b-f85f9d932e76-e-68845f8c error="fail to sync engine for longhorn-system/pvc-3646eb36-8df0-42c7-851b-f85f9d932e76-e-68845f8c: invalid Instance Manager instance-manager-e-28f5f113, state: starting, IP: " node=getlab-run-int02
time="2023-02-22T10:26:34Z" level=warning msg="Error syncing Longhorn engine" controller=longhorn-engine engine=longhorn-system/pvc-3646eb36-8df0-42c7-851b-f85f9d932e76-e-68845f8c error="fail to sync engine for longhorn-system/pvc-3646eb36-8df0-42c7-851b-f85f9d932e76-e-68845f8c: invalid Instance Manager instance-manager-e-28f5f113, state: starting, IP: " node=getlab-run-int02
E0222 10:26:34.586229       1 engine_controller.go:196] fail to sync engine for longhorn-system/pvc-3646eb36-8df0-42c7-851b-f85f9d932e76-e-68845f8c: invalid Instance Manager instance-manager-e-28f5f113, state: starting, IP: 
time="2023-02-22T10:26:34Z" level=warning msg="Dropping Longhorn engine out of the queue" controller=longhorn-engine engine=longhorn-system/pvc-3646eb36-8df0-42c7-851b-f85f9d932e76-e-68845f8c error="fail to sync engine for longhorn-system/pvc-3646eb36-8df0-42c7-851b-f85f9d932e76-e-68845f8c: invalid Instance Manager instance-manager-e-28f5f113, state: starting, IP: " node=getlab-run-int02
time="2023-02-22T10:26:36Z" level=info msg="Deleted pod gitlab-prometheus-server-77b5cc946-nhxff so that Kubernetes will handle remounting volume pvc-2e73556c-0fc7-4d8d-92d3-5ea2b5637f6e" controller=longhorn-kubernetes-pod node=getlab-run-int02
time="2023-02-22T10:26:36Z" level=info msg="Deleted pod gitlab-prometheus-server-77b5cc946-nhxff so that Kubernetes will handle remounting volume pvc-2e73556c-0fc7-4d8d-92d3-5ea2b5637f6e" controller=longhorn-kubernetes-pod node=getlab-run-int02
time="2023-02-22T10:26:36Z" level=info msg="Deleted pod gitlab-prometheus-server-77b5cc946-nhxff so that Kubernetes will handle remounting volume pvc-2e73556c-0fc7-4d8d-92d3-5ea2b5637f6e" controller=longhorn-kubernetes-pod node=getlab-run-int02
time="2023-02-22T10:26:37Z" level=debug msg="Start monitoring instance manager instance-manager-r-c73a27d6" controller=longhorn-instance-manager instance manager=instance-manager-r-c73a27d6 node=getlab-run-int02
time="2023-02-22T10:26:38Z" level=debug msg="Start monitoring instance manager instance-manager-e-28f5f113" controller=longhorn-instance-manager instance manager=instance-manager-e-28f5f113 node=getlab-run-int02
time="2023-02-22T10:26:38Z" level=debug msg="Prepare to create instance pvc-3646eb36-8df0-42c7-851b-f85f9d932e76-e-68845f8c"
time="2023-02-22T10:26:38Z" level=info msg="Event(v1.ObjectReference{Kind:\"Engine\", Namespace:\"longhorn-system\", Name:\"pvc-3646eb36-8df0-42c7-851b-f85f9d932e76-e-68845f8c\", UID:\"79bbffab-f32c-4890-b768-0eb4b1f851d7\", APIVersion:\"longhorn.io/v1beta1\", ResourceVersion:\"217356712\", FieldPath:\"\"}): type: 'Normal' reason: 'Start' Starts pvc-3646eb36-8df0-42c7-851b-f85f9d932e76-e-68845f8c"
time="2023-02-22T10:26:39Z" level=debug msg="Instance process pvc-3646eb36-8df0-42c7-851b-f85f9d932e76-e-68845f8c had been created, need to wait for instance manager update"
time="2023-02-22T10:26:39Z" level=debug msg="Instance handler updated instance pvc-3646eb36-8df0-42c7-851b-f85f9d932e76-e-68845f8c state, old state stopped, new state starting"
time="2023-02-22T10:26:39Z" level=debug msg="Prepare to create instance pvc-5d333ff6-8f37-4417-ba07-80ac9894aa00-r-445efc4e"
time="2023-02-22T10:26:39Z" level=info msg="Event(v1.ObjectReference{Kind:\"Replica\", Namespace:\"longhorn-system\", Name:\"pvc-5d333ff6-8f37-4417-ba07-80ac9894aa00-r-445efc4e\", UID:\"4675c5bc-2650-44be-8c4e-459c18d0961e\", APIVersion:\"longhorn.io/v1beta1\", ResourceVersion:\"217356802\", FieldPath:\"\"}): type: 'Normal' reason: 'Start' Starts pvc-5d333ff6-8f37-4417-ba07-80ac9894aa00-r-445efc4e"
time="2023-02-22T10:26:39Z" level=debug msg="Instance pvc-5d333ff6-8f37-4417-ba07-80ac9894aa00-r-445efc4e starts running, IP 10.42.9.105"
time="2023-02-22T10:26:39Z" level=debug msg="Instance pvc-5d333ff6-8f37-4417-ba07-80ac9894aa00-r-445efc4e starts running, Port 10000"
time="2023-02-22T10:26:39Z" level=debug msg="Instance handler updated instance pvc-5d333ff6-8f37-4417-ba07-80ac9894aa00-r-445efc4e state, old state stopped, new state running"
time="2023-02-22T10:26:41Z" level=info msg="Deleted pod gitlab-minio-648d5957d6-k47kk so that Kubernetes will handle remounting volume pvc-3646eb36-8df0-42c7-851b-f85f9d932e76" controller=longhorn-kubernetes-pod node=getlab-run-int02
time="2023-02-22T10:26:42Z" level=debug msg="Instance pvc-3646eb36-8df0-42c7-851b-f85f9d932e76-e-68845f8c starts running, IP 10.42.9.106"
time="2023-02-22T10:26:42Z" level=debug msg="Instance pvc-3646eb36-8df0-42c7-851b-f85f9d932e76-e-68845f8c starts running, Port 10000"
time="2023-02-22T10:26:42Z" level=debug msg="Instance handler updated instance pvc-3646eb36-8df0-42c7-851b-f85f9d932e76-e-68845f8c state, old state starting, new state running"
time="2023-02-22T10:26:42Z" level=debug msg="Start monitoring engine" controller=longhorn-engine engine=pvc-3646eb36-8df0-42c7-851b-f85f9d932e76-e-68845f8c node=getlab-run-int02
time="2023-02-22T10:26:42Z" level=info msg="Event(v1.ObjectReference{Kind:\"Volume\", Namespace:\"longhorn-system\", Name:\"pvc-3646eb36-8df0-42c7-851b-f85f9d932e76\", UID:\"c656fb93-3798-4138-8d0d-0b56383a8090\", APIVersion:\"longhorn.io/v1beta1\", ResourceVersion:\"217356832\", FieldPath:\"\"}): type: 'Normal' reason: 'Attached' volume pvc-3646eb36-8df0-42c7-851b-f85f9d932e76 has been attached to getlab-run-int02"
time="2023-02-22T10:26:43Z" level=debug msg="Prepare to create instance pvc-14ac7186-ee21-4b37-8d80-b4307f1f2b69-r-00905b87"
time="2023-02-22T10:26:43Z" level=info msg="Event(v1.ObjectReference{Kind:\"Replica\", Namespace:\"longhorn-system\", Name:\"pvc-14ac7186-ee21-4b37-8d80-b4307f1f2b69-r-00905b87\", UID:\"24910ac5-a85f-4736-a380-c85156470926\", APIVersion:\"longhorn.io/v1beta1\", ResourceVersion:\"217356861\", FieldPath:\"\"}): type: 'Normal' reason: 'Start' Starts pvc-14ac7186-ee21-4b37-8d80-b4307f1f2b69-r-00905b87"
time="2023-02-22T10:26:43Z" level=debug msg="Instance pvc-14ac7186-ee21-4b37-8d80-b4307f1f2b69-r-00905b87 starts running, IP 10.42.9.105"
time="2023-02-22T10:26:43Z" level=debug msg="Instance pvc-14ac7186-ee21-4b37-8d80-b4307f1f2b69-r-00905b87 starts running, Port 10015"
time="2023-02-22T10:26:43Z" level=debug msg="Instance handler updated instance pvc-14ac7186-ee21-4b37-8d80-b4307f1f2b69-r-00905b87 state, old state stopped, new state running"
I0222 10:26:45.313436       1 trace.go:116] Trace[835389258]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:135 (started: 2023-02-22 10:26:18.040362967 +0000 UTC m=+42709.882994472) (total time: 27.272954341s):
Trace[835389258]: [27.272890408s] [27.272890408s] Objects listed
10.42.10.126 - - [22/Feb/2023:10:26:53 +0000] "GET /metrics HTTP/1.1" 200 995 "" "Prometheus/2.27.1"
I0222 10:26:56.080112       1 trace.go:116] Trace[980592976]: "Reflector ListAndWatch" name:github.com/longhorn/longhorn-manager/k8s/pkg/client/informers/externalversions/factory.go:117 (started: 2023-02-22 10:26:17.879107287 +0000 UTC m=+42709.721738751) (total time: 38.200929714s):
Trace[980592976]: [38.200869089s] [38.200869089s] Objects listed
I0222 10:26:57.510865       1 trace.go:116] Trace[1678498869]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:135 (started: 2023-02-22 10:26:17.728839431 +0000 UTC m=+42709.571470928) (total time: 39.781943113s):
Trace[1678498869]: [39.781844612s] [39.781844612s] Objects listed
I0222 10:27:00.928273       1 trace.go:116] Trace[1985815069]: "Reflector ListAndWatch" name:github.com/longhorn/longhorn-manager/k8s/pkg/client/informers/externalversions/factory.go:117 (started: 2023-02-22 10:26:18.004957351 +0000 UTC m=+42709.847588812) (total time: 42.923253741s):
Trace[1985815069]: [42.923196806s] [42.923196806s] Objects listed
I0222 10:27:01.745594       1 trace.go:116] Trace[289389631]: "Reflector ListAndWatch" name:github.com/longhorn/longhorn-manager/k8s/pkg/client/informers/externalversions/factory.go:117 (started: 2023-02-22 10:26:18.355574679 +0000 UTC m=+42710.198206170) (total time: 43.389940636s):
Trace[289389631]: [43.389849162s] [43.389849162s] Objects listed
10.42.10.126 - - [22/Feb/2023:10:27:53 +0000] "GET /metrics HTTP/1.1" 200 1066 "" "Prometheus/2.27.1"

Some additional logs, from the CSI sidecar containers and longhorn-manager on various dates:

I0216 13:23:49.346253       1 csi_handler.go:228] Error processing "csi-57fcb8c58465d04ad51a2197a689c82d8ff578e7ace8827b74d106073af663ff": failed to attach: rpc error: code = Internal desc = Bad response statusCode [500]. Status [500 Internal Server Error]. Body: [detail=, message=unable to attach volume pvc-07c16c86-ebf4-4cf7-8399-a424e9bd68d3 to gitlab-run-int01: node gitlab-run-int01 is not ready, couldn't attach volume pvc-07c16c86-ebf4-4cf7-8399-a424e9bd68d3 to it, code=Server Error] from [http://longhorn-backend:9500/v1/volumes/pvc-07c16c86-ebf4-4cf7-8399-a424e9bd68d3?action=attach]

E0214 15:57:21.910427       1 leaderelection.go:325] error retrieving resource lock longhorn-system/driver-longhorn-io: Get "https://10.43.0.1:443/apis/coordination.k8s.io/v1/namespaces/longhorn-system/leases/driver-longhorn-io": http2: client connection lost
E0214 15:57:30.339763       1 leaderelection.go:325] error retrieving resource lock longhorn-system/driver-longhorn-io: Get "https://10.43.0.1:443/apis/coordination.k8s.io/v1/namespaces/longhorn-system/leases/driver-longhorn-io": dial tcp 10.43.0.1:443: connect: no route to host

E0214 15:57:25.560117       1 leaderelection.go:325] error retrieving resource lock longhorn-system/external-resizer-driver-longhorn-io: Get "https://10.43.0.1:443/apis/coordination.k8s.io/v1/namespaces/longhorn-system/leases/external-resizer-driver-longhorn-io": http2: client connection lost

E0222 15:39:56.377301       1 reflector.go:127] github.com/kubernetes-csi/external-snapshotter/client/v3/informers/externalversions/factory.go:117: Failed to watch v1beta1.VolumeSnapshotContent: failed to list v1beta1.VolumeSnapshotContent: the server could not find the requested resource (get volumesnapshotcontents.snapshot.storage.k8s.io)

time="2023-02-22T10:26:34Z" level=warning msg="Error syncing Longhorn engine" controller=longhorn-engine engine=longhorn-system/pvc-3646eb36-8df0-42c7-851b-f85f9d932e76-e-68845f8c error="fail to sync engine for longhorn-system/pvc-3646eb36-8df0-42c7-851b-f85f9d932e76-e-68845f8c: invalid Instance Manager instance-manager-e-28f5f113, state: starting, IP: " node=getlab-run-int02

Does the issue still occur? What's your LH version?
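
If it helps, something like the following should show it (a minimal sketch, assuming the default longhorn-system namespace and the stock longhorn-manager DaemonSet name):

# print the longhorn-manager image tag, which carries the Longhorn version
kubectl -n longhorn-system get daemonset longhorn-manager -o jsonpath='{.spec.template.spec.containers[0].image}'

# check whether the manager / instance-manager pods on the affected node are restarting
kubectl -n longhorn-system get pods -o wide | grep getlab-run-int02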