Description
I deployed the Bitnami Kafka Helm chart on a 6-node bare-metal Rancher Kubernetes cluster. It ran fine for 18 days; yesterday it started throwing "failed to attach disk" and "iscsiadm: Could not login to" errors.
Expected Behavior
`kafka-zookeeper-0` should start and stay running.
Current Behavior
- Events from `kafka-zookeeper-0`:
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedScheduling 12m (x10 over 13m) default-scheduler pod has unbound immediate PersistentVolumeClaims (repeated 2 times)
Normal Scheduled 12m default-scheduler Successfully assigned default/kafka-zookeeper-0 to server3
Normal SuccessfulAttachVolume 12m attachdetach-controller AttachVolume.Attach succeeded for volume "pvc-520a9c63-c82d-11e9-bca0-246e9647129c"
Warning FailedMount 10m kubelet, server3 MountVolume.WaitForAttach failed for volume "pvc-520a9c63-c82d-11e9-bca0-246e9647129c" : failed to get any path for iscsi disk, last err seen:
iscsi: failed to attach disk: Error: iscsiadm: could not read session targetname: 5
iscsiadm: could not find session info for session75
iscsiadm: could not read session targetname: 5
iscsiadm: could not find session info for session76
iscsiadm: Could not login to [iface: default, target: iqn.2016-09.com.openebs.cstor:pvc-520a9c63-c82d-11e9-bca0-246e9647129c, portal: 10.43.229.40,3260].
iscsiadm: initiator reported error (8 - connection timed out)
iscsiadm: Could not log into all portals
Logging in to [iface: default, target: iqn.2016-09.com.openebs.cstor:pvc-520a9c63-c82d-11e9-bca0-246e9647129c, portal: 10.43.229.40,3260] (multiple)
(exit status 8)
Warning FailedMount 8m52s kubelet, server3 MountVolume.WaitForAttach failed for volume "pvc-520a9c63-c82d-11e9-bca0-246e9647129c" : failed to get any path for iscsi disk, last err seen:
iscsi: failed to attach disk: Error: iscsiadm: could not read session targetname: 5
iscsiadm: could not find session info for session79
iscsiadm: Could not login to [iface: default, target: iqn.2016-09.com.openebs.cstor:pvc-520a9c63-c82d-11e9-bca0-246e9647129c, portal: 10.43.229.40,3260].
iscsiadm: initiator reported error (5 - encountered iSCSI login failure)
iscsiadm: Could not log into all portals
Logging in to [iface: default, target: iqn.2016-09.com.openebs.cstor:pvc-520a9c63-c82d-11e9-bca0-246e9647129c, portal: 10.43.229.40,3260] (multiple)
(exit status 5)
Warning FailedMount 6m50s kubelet, server3 MountVolume.WaitForAttach failed for volume "pvc-520a9c63-c82d-11e9-bca0-246e9647129c" : failed to get any path for iscsi disk, last err seen:
iscsi: failed to attach disk: Error: iscsiadm: could not read session targetname: 5
iscsiadm: could not find session info for session81
iscsiadm: Could not login to [iface: default, target: iqn.2016-09.com.openebs.cstor:pvc-520a9c63-c82d-11e9-bca0-246e9647129c, portal: 10.43.229.40,3260].
iscsiadm: initiator reported error (8 - connection timed out)
iscsiadm: Could not log into all portals
Logging in to [iface: default, target: iqn.2016-09.com.openebs.cstor:pvc-520a9c63-c82d-11e9-bca0-246e9647129c, portal: 10.43.229.40,3260] (multiple)
(exit status 8)
Warning FailedMount 4m48s kubelet, server3 MountVolume.WaitForAttach failed for volume "pvc-520a9c63-c82d-11e9-bca0-246e9647129c" : failed to get any path for iscsi disk, last err seen:
iscsi: failed to attach disk: Error: iscsiadm: could not read session targetname: 5
iscsiadm: could not find session info for session83
iscsiadm: Could not login to [iface: default, target: iqn.2016-09.com.openebs.cstor:pvc-520a9c63-c82d-11e9-bca0-246e9647129c, portal: 10.43.229.40,3260].
iscsiadm: initiator reported error (5 - encountered iSCSI login failure)
iscsiadm: Could not log into all portals
Logging in to [iface: default, target: iqn.2016-09.com.openebs.cstor:pvc-520a9c63-c82d-11e9-bca0-246e9647129c, portal: 10.43.229.40,3260] (multiple)
(exit status 5)
Warning FailedMount 2m43s kubelet, server3 MountVolume.WaitForAttach failed for volume "pvc-520a9c63-c82d-11e9-bca0-246e9647129c" : failed to get any path for iscsi disk, last err seen:
iscsi: failed to attach disk: Error: iscsiadm: could not read session targetname: 5
iscsiadm: could not find session info for session85
iscsiadm: Could not login to [iface: default, target: iqn.2016-09.com.openebs.cstor:pvc-520a9c63-c82d-11e9-bca0-246e9647129c, portal: 10.43.229.40,3260].
iscsiadm: initiator reported error (8 - connection timed out)
iscsiadm: Could not log into all portals
Logging in to [iface: default, target: iqn.2016-09.com.openebs.cstor:pvc-520a9c63-c82d-11e9-bca0-246e9647129c, portal: 10.43.229.40,3260] (multiple)
(exit status 8)
Warning FailedMount 111s (x5 over 10m) kubelet, server3 Unable to mount volumes for pod "kafka-zookeeper-0_default(520b7d19-c82d-11e9-bca0-246e9647129c)": timeout expired waiting for volumes to attach or mount for pod "default"/"kafka-zookeeper-0". list of unmounted volumes=[data]. list of unattached volumes=[data default-token-nxb57]
Warning FailedMount 35s kubelet, server3 MountVolume.WaitForAttach failed for volume "pvc-520a9c63-c82d-11e9-bca0-246e9647129c" : failed to get any path for iscsi disk, last err seen:
iscsi: failed to attach disk: Error: iscsiadm: could not read session targetname: 5
iscsiadm: could not find session info for session87
iscsiadm: Could not login to [iface: default, target: iqn.2016-09.com.openebs.cstor:pvc-520a9c63-c82d-11e9-bca0-246e9647129c, portal: 10.43.229.40,3260].
iscsiadm: initiator reported error (5 - encountered iSCSI login failure)
iscsiadm: Could not log into all portals
Logging in to [iface: default, target: iqn.2016-09.com.openebs.cstor:pvc-520a9c63-c82d-11e9-bca0-246e9647129c, portal: 10.43.229.40,3260] (multiple)
(exit status 5)
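For reference, the same login the kubelet attempts can be run manually from the affected node; a minimal sketch, using the IQN and portal from the events above (run as root on `server3`):

```sh
# List current iSCSI sessions; the errors above suggest stale or unreadable sessions
iscsiadm -m session

# Discover the targets exposed by the cStor target service for this PVC
iscsiadm -m discovery -t sendtargets -p 10.43.229.40:3260

# Attempt the same login the kubelet performs
iscsiadm -m node \
  -T iqn.2016-09.com.openebs.cstor:pvc-520a9c63-c82d-11e9-bca0-246e9647129c \
  -p 10.43.229.40:3260 --login
```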
Steps to Reproduce
1. Create an `openebs-cstor-disk` StorageClass.
2. Deploy the Bitnami Kafka Helm chart using the `openebs-cstor-disk` StorageClass (a sketch of both steps follows below).
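A rough sketch of the two steps, assuming Helm 2 (Tiller is running on this cluster) and the standard OpenEBS cStor StorageClass annotations; the StoragePoolClaim name below is inferred from the `cstor-disk-*` pool pods and may not match exactly:

```sh
# Step 1: create the cStor StorageClass
# (SPC name "cstor-disk" and ReplicaCount "3" are assumptions)
cat <<EOF | kubectl apply -f -
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: openebs-cstor-disk
  annotations:
    openebs.io/cas-type: cstor
    cas.openebs.io/config: |
      - name: StoragePoolClaim
        value: "cstor-disk"
      - name: ReplicaCount
        value: "3"
provisioner: openebs.io/provisioner-iscsi
EOF

# Step 2: deploy the Bitnami Kafka chart against that StorageClass
# (exact value names vary by chart version; global.storageClass is one common knob)
helm repo add bitnami https://charts.bitnami.com/bitnami
helm install --name kafka bitnami/kafka \
  --set global.storageClass=openebs-cstor-disk
```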
What I’ve tried so far
I redeployed the pod on a different node by cordoning the node it was running on; even on the new node it hit the same error. `kafka-zookeeper-1` and `kafka-zookeeper-2` could be deleted and redeployed without any issue; it was always `kafka-zookeeper-0` that had the recurring problem. I've tried redeploying one pod at a time, and I've tried changing the StorageClass to `openebs-local`, `openebs-device`, and `openebs-jiva-default`. The commands used are sketched below.
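The cordon-and-redeploy attempt was roughly the following (a sketch; `server3` stands in for whichever node was running the pod at the time):

```sh
# Prevent new pods from being scheduled on the node currently running the pod
kubectl cordon server3

# Delete the pod; the StatefulSet controller recreates it on a schedulable node
kubectl delete pod kafka-zookeeper-0

# Watch where it lands and whether the volume mounts
kubectl get pod kafka-zookeeper-0 -o wide -w

# Restore the node afterwards
kubectl uncordon server3
```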
Your Environment
- Rancher v2.2.7.
- `systemctl status iscsid`:
● iscsid.service - iSCSI initiator daemon (iscsid)
Loaded: loaded (/lib/systemd/system/iscsid.service; enabled; vendor preset: enabled)
Active: active (running) since Thu 2019-08-08 10:44:05 PDT; 2 weeks 4 days ago
Docs: man:iscsid(8)
Process: 2123 ExecStart=/sbin/iscsid (code=exited, status=0/SUCCESS)
Process: 2092 ExecStartPre=/lib/open-iscsi/startup-checks.sh (code=exited, status=0/SUCCESS)
Main PID: 2128 (iscsid)
Tasks: 2
Memory: 3.2M
CPU: 9min 17.251s
CGroup: /system.slice/iscsid.service
├─2127 /sbin/iscsid
└─2128 /sbin/iscsid
Aug 26 11:28:33 server5 iscsid[2127]: conn 0 login rejected: target error (03/01)
Aug 26 11:28:36 server5 iscsid[2127]: conn 0 login rejected: target error (03/01)
Aug 26 11:28:39 server5 iscsid[2127]: conn 0 login rejected: target error (03/01)
Aug 26 11:28:42 server5 iscsid[2127]: conn 0 login rejected: target error (03/01)
Aug 26 11:28:45 server5 iscsid[2127]: conn 0 login rejected: target error (03/01)
Aug 26 11:28:48 server5 iscsid[2127]: conn 0 login rejected: target error (03/01)
Aug 26 11:28:51 server5 iscsid[2127]: conn 0 login rejected: target error (03/01)
Aug 26 11:28:54 server5 iscsid[2127]: conn 0 login rejected: target error (03/01)
Aug 26 11:28:57 server5 iscsid[2127]: conn 0 login rejected: target error (03/01)
Aug 26 11:29:00 server5 iscsid[2127]: conn 0 login rejected: target error (03/01)
- `kubectl get nodes`:
NAME STATUS ROLES AGE VERSION
server1 Ready,SchedulingDisabled controlplane,etcd,worker 172d v1.13.4
server2 Ready,SchedulingDisabled controlplane,etcd,worker 172d v1.13.4
server3 Ready controlplane,etcd,worker 172d v1.13.4
server4 Ready,SchedulingDisabled controlplane,etcd,worker 172d v1.13.4
server5 Ready controlplane,etcd,worker 172d v1.13.4
server6 Ready,SchedulingDisabled controlplane,etcd,worker 172d v1.13.4
- `kubectl get pods --all-namespaces`:
NAMESPACE NAME READY STATUS RESTARTS AGE
cattle-prometheus-p-9jmpq grafana-project-monitoring-5b55d6798d-dgsfb 2/2 Running 0 3d2h
cattle-prometheus-p-9jmpq prometheus-project-monitoring-0 4/4 Running 1 3d2h
cattle-prometheus-p-h625k grafana-project-monitoring-5b55d6798d-c675w 2/2 Running 0 3d2h
cattle-prometheus-p-h625k prometheus-project-monitoring-0 3/4 CrashLoopBackOff 837 3d2h
cattle-prometheus exporter-kube-state-cluster-monitoring-58f946d4d7-q6d9v 1/1 Running 0 3d19h
cattle-prometheus exporter-node-cluster-monitoring-4qg87 1/1 Running 0 3d19h
cattle-prometheus exporter-node-cluster-monitoring-6r9wk 1/1 Running 0 3d19h
cattle-prometheus exporter-node-cluster-monitoring-gbs9n 1/1 Running 0 3d19h
cattle-prometheus exporter-node-cluster-monitoring-mp6f8 1/1 Running 0 3d19h
cattle-prometheus exporter-node-cluster-monitoring-nhn6s 1/1 Running 0 3d19h
cattle-prometheus exporter-node-cluster-monitoring-nr79g 1/1 Running 0 3d19h
cattle-prometheus grafana-cluster-monitoring-65689ff45c-7dtmp 2/2 Running 0 3d19h
cattle-prometheus prometheus-cluster-monitoring-0 5/5 Running 1 3d3h
cattle-prometheus prometheus-operator-monitoring-operator-85cbcb85b-m4x6b 1/1 Running 0 3d20h
cattle-system cattle-cluster-agent-69c65cd68b-r2xpb 1/1 Running 0 16d
cattle-system cattle-node-agent-2wk5f 1/1 Running 0 16d
cattle-system cattle-node-agent-9l68n 1/1 Running 0 16d
cattle-system cattle-node-agent-h4zjh 1/1 Running 0 16d
cattle-system cattle-node-agent-q8s5b 1/1 Running 0 16d
cattle-system cattle-node-agent-rcpqm 1/1 Running 2 16d
cattle-system cattle-node-agent-wj8jl 1/1 Running 0 16d
default consul-5gqnp 1/1 Running 0 17d
default consul-62476 1/1 Running 0 17d
default consul-kq5nn 1/1 Running 0 17d
default consul-qkpwb 1/1 Running 1 17d
default consul-server-0 1/1 Running 0 18d
default consul-server-1 1/1 Running 0 18d
default consul-server-2 1/1 Running 0 18d
default consul-smtdz 1/1 Running 0 18d
default consul-x54rd 1/1 Running 0 18d
default cstor-disk-atl8-64584f475-6scrh 3/3 Running 0 18d
default cstor-disk-fx5x-54547b5648-bbjpr 3/3 Running 13 18d
default cstor-disk-il9t-6898cd578c-cfqbg 3/3 Running 0 18d
default cstor-disk-pmz1-7ffc589dc4-9wx88 3/3 Running 3 18d
default cstor-disk-z8rf-58db67b954-cdm7c 3/3 Running 0 18d
default elk-elasticsearch-client-68697747f-fjkqs 1/1 Running 0 13d
default elk-elasticsearch-client-68697747f-p4j6s 1/1 Running 0 13d
default elk-elasticsearch-client-68697747f-vhrwz 1/1 Running 0 13d
default elk-elasticsearch-data-0 1/1 Running 0 13d
default elk-elasticsearch-data-1 1/1 Running 0 13d
default elk-elasticsearch-master-0 1/1 Running 0 13d
default elk-elasticsearch-master-1 1/1 Running 0 13d
default elk-elasticsearch-master-2 1/1 Running 0 13d
default elk-kibana-7dffb6669b-tv4xc 1/1 Running 0 13d
default kafka-0 1/2 CrashLoopBackOff 6 10m
default kafka-1 1/2 CrashLoopBackOff 6 10m
default kafka-2 0/2 ContainerCreating 0 10m
default kafka-zookeeper-0 0/1 ContainerCreating 0 10m
default kafka-zookeeper-1 1/1 Running 0 10m
default kafka-zookeeper-2 0/1 ContainerCreating 0 10m
default kesfirehose-66f8b7b6db-5kdf4 1/1 Running 1 2d20h
default openebs-admission-server-7765bcf6c8-jn2mw 1/1 Running 0 18d
default openebs-apiserver-5887b4c897-8bzmz 1/1 Running 0 18d
default openebs-localpv-provisioner-54b8f49448-nqk5k 1/1 Running 2 18d
default openebs-ndm-667lp 1/1 Running 1 14d
default openebs-ndm-fwl2g 1/1 Running 0 14d
default openebs-ndm-k6skg 1/1 Running 0 14d
default openebs-ndm-operator-877dc9bbf-jzf55 1/1 Running 2 18d
default openebs-ndm-q4phr 1/1 Running 0 14d
default openebs-ndm-qhg8j 1/1 Running 0 14d
default openebs-ndm-smtm9 1/1 Running 0 14d
default openebs-provisioner-75cb59c8dc-k4xt7 1/1 Running 2 18d
default openebs-snapshot-operator-689995d579-jf5w8 2/2 Running 2 14d
default pvc-183d1ed8-b958-11e9-9327-246e964713cc-target-75b4f7d677zkg67 3/3 Running 0 14d
default pvc-18407d32-b958-11e9-9327-246e964713cc-target-5995795d48q27mm 3/3 Running 0 14d
default pvc-1841fba9-b958-11e9-9327-246e964713cc-target-6d5fc796f5pxww8 3/3 Running 0 14d
default pvc-2060e51f-bd55-11e9-9895-246e96472fd4-target-6dd5c8fb88wv6qp 3/3 Running 0 13d
default pvc-42ebb346-c52d-11e9-a8d8-246e96472fd4-target-65b9d6bb87s4h4q 3/3 Running 0 3d19h
default pvc-4b74d121-bd55-11e9-9895-246e96472fd4-target-5dbd7cbfc8cn7nz 3/3 Running 0 13d
default pvc-4f9ec197-c5bc-11e9-a8d8-246e96472fd4-target-69c6957f94q7656 3/3 Running 0 3d2h
default pvc-4fe85389-c5bc-11e9-bca0-246e9647129c-target-7f857d99fbklpgj 3/3 Running 0 3d2h
default pvc-520a9c63-c82d-11e9-bca0-246e9647129c-target-7bf7bf6dbbgng6f 3/3 Running 0 10m
default pvc-520c1f0a-c82d-11e9-bca0-246e9647129c-target-78f47bf9d6dxlnq 3/3 Running 0 10m
default pvc-520d4d6e-c82d-11e9-bca0-246e9647129c-target-6bd99fc658nsrzh 3/3 Running 0 10m
default pvc-52106708-c82d-11e9-bca0-246e9647129c-target-d56f6dc4c-gvmp6 3/3 Running 0 10m
default pvc-52118f9a-c82d-11e9-bca0-246e9647129c-target-7c87bd794f795j9 3/3 Running 0 10m
default pvc-52129ccb-c82d-11e9-bca0-246e9647129c-target-554bb8fd49kf59h 3/3 Running 0 10m
default pvc-645b2bfa-c5b9-11e9-bca0-246e9647129c-target-6bd44c74dbs28dw 3/3 Running 0 3d3h
default pvc-6d6619be-bd5c-11e9-9895-246e96472fd4-target-5984947ccbpkjvs 3/3 Running 0 13d
default pvc-76fb5d53-c829-11e9-bca0-246e9647129c-ctrl-745c45ff96-88wtx 2/2 Running 0 38m
default pvc-76fb5d53-c829-11e9-bca0-246e9647129c-rep-68997694f6-2rbvl 1/1 Running 0 38m
default pvc-76fb5d53-c829-11e9-bca0-246e9647129c-rep-68997694f6-wqft9 0/1 Pending 0 38m
default pvc-76fb5d53-c829-11e9-bca0-246e9647129c-rep-68997694f6-wrp4z 1/1 Running 0 38m
default pvc-76fce4db-c829-11e9-bca0-246e9647129c-ctrl-6999bccf8-dmxhq 2/2 Running 0 38m
default pvc-76fce4db-c829-11e9-bca0-246e9647129c-rep-55b68877f-29b9k 1/1 Running 0 38m
default pvc-76fce4db-c829-11e9-bca0-246e9647129c-rep-55b68877f-j2qjc 1/1 Running 0 38m
default pvc-76fce4db-c829-11e9-bca0-246e9647129c-rep-55b68877f-lhrn8 0/1 Pending 0 38m
default pvc-76fe2152-c829-11e9-bca0-246e9647129c-ctrl-67f4f57fc4-jtblx 2/2 Running 0 38m
default pvc-76fe2152-c829-11e9-bca0-246e9647129c-rep-7dcc7d897d-qjh8f 0/1 Pending 0 38m
default pvc-76fe2152-c829-11e9-bca0-246e9647129c-rep-7dcc7d897d-t44v2 1/1 Running 0 38m
default pvc-76fe2152-c829-11e9-bca0-246e9647129c-rep-7dcc7d897d-zftv5 1/1 Running 0 38m
default pvc-77019ed1-c829-11e9-bca0-246e9647129c-ctrl-5d958b8c5d-5h72x 2/2 Running 0 38m
default pvc-77019ed1-c829-11e9-bca0-246e9647129c-rep-699ccddb5d-dlqqk 1/1 Running 0 38m
default pvc-77019ed1-c829-11e9-bca0-246e9647129c-rep-699ccddb5d-l56xj 0/1 Pending 0 38m
default pvc-77019ed1-c829-11e9-bca0-246e9647129c-rep-699ccddb5d-tm5d6 1/1 Running 0 38m
default pvc-7702cb66-c829-11e9-bca0-246e9647129c-ctrl-864b55dcd-d2jpv 2/2 Running 0 38m
default pvc-7702cb66-c829-11e9-bca0-246e9647129c-rep-58d489f4f7-6xg9c 0/1 Pending 0 38m
default pvc-7702cb66-c829-11e9-bca0-246e9647129c-rep-58d489f4f7-gkwlw 1/1 Running 0 38m
default pvc-7702cb66-c829-11e9-bca0-246e9647129c-rep-58d489f4f7-rx9wg 1/1 Running 0 38m
default pvc-7703e236-c829-11e9-bca0-246e9647129c-ctrl-57b956886f-2p8bt 2/2 Running 0 38m
default pvc-7703e236-c829-11e9-bca0-246e9647129c-rep-589766f95b-7wxll 1/1 Running 0 38m
default pvc-7703e236-c829-11e9-bca0-246e9647129c-rep-589766f95b-dv64g 0/1 Pending 0 38m
default pvc-7703e236-c829-11e9-bca0-246e9647129c-rep-589766f95b-z99kz 1/1 Running 0 38m
default pvc-d1c4de59-c5bb-11e9-a8d8-246e96472fd4-target-5f9b4845959tx5g 3/3 Running 0 3d2h
default pvc-d1e5d32f-c5bb-11e9-bca0-246e9647129c-target-57b8ffd58bz6b9k 3/3 Running 0 3d2h
default pvc-ec86ec2e-bd54-11e9-9895-246e96472fd4-target-7469f7cb5977lbl 3/3 Running 0 13d
default pvc-ec89e6ca-bd54-11e9-9895-246e96472fd4-target-7696956d86mksx7 3/3 Running 0 13d
default testclient 1/1 Running 0 18d
ingress-nginx default-http-backend-7f8fbb85db-j4mkr 1/1 Running 1 172d
ingress-nginx nginx-ingress-controller-2nfzv 1/1 Running 1 172d
ingress-nginx nginx-ingress-controller-8h7pr 1/1 Running 1 172d
ingress-nginx nginx-ingress-controller-c9znx 1/1 Running 1 172d
ingress-nginx nginx-ingress-controller-jvnxz 1/1 Running 0 172d
ingress-nginx nginx-ingress-controller-v6lpw 1/1 Running 2 172d
ingress-nginx nginx-ingress-controller-xzt2g 1/1 Running 1 172d
kube-system canal-2wwdb 2/2 Running 0 6d19h
kube-system canal-6z8vn 2/2 Running 0 6d19h
kube-system canal-hn4lt 2/2 Running 0 6d19h
kube-system canal-lhh24 2/2 Running 0 6d19h
kube-system canal-rsd9w 2/2 Running 0 6d19h
kube-system canal-vqkmk 2/2 Running 0 6d19h
kube-system kube-dns-667c7cb9dd-r4f4n 3/3 Running 0 6d19h
kube-system kube-dns-autoscaler-577d74d8b5-9zw8t 1/1 Running 0 6d19h
kube-system metrics-server-7fbd549b78-ffppz 1/1 Running 1 172d
kube-system rke-ingress-controller-deploy-job-cvxqm 0/1 Completed 0 172d
kube-system rke-kube-dns-addon-deploy-job-h4j69 0/1 Completed 0 6d19h
kube-system rke-kubedns-addon-deploy-job-xgx5n 0/1 Completed 0 172d
kube-system rke-metrics-addon-deploy-job-q2l25 0/1 Completed 0 172d
kube-system rke-network-plugin-deploy-job-24b9k 0/1 Completed 0 6d19h
kube-system tiller-deploy-b6647fc9d-mfp47 1/1 Running 1 19d
- `kubectl get services`:
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
admission-server-svc ClusterIP 10.43.32.12 <none> 443/TCP 18d
consul-dns ClusterIP 10.43.117.234 <none> 53/TCP,53/UDP 18d
consul-server ClusterIP None <none> 8500/TCP,8301/TCP,8301/UDP,8302/TCP,8302/UDP,8300/TCP,8600/TCP,8600/UDP 18d
consul-ui NodePort 10.43.249.102 <none> 80:31608/TCP 18d
elk-elasticsearch-client ClusterIP 10.43.51.70 <none> 9200/TCP 13d
elk-elasticsearch-discovery ClusterIP None <none> 9300/TCP 13d
elk-kibana ClusterIP 10.43.44.22 <none> 443/TCP 13d
kafka ClusterIP 10.43.80.9 <none> 9092/TCP 11m
kafka-headless ClusterIP None <none> 9092/TCP 11m
kafka-zookeeper ClusterIP 10.43.56.128 <none> 2181/TCP,2888/TCP,3888/TCP 11m
kafka-zookeeper-headless ClusterIP None <none> 2181/TCP,2888/TCP,3888/TCP 11m
kesfirehose NodePort 10.43.254.67 <none> 80:30557/TCP 18d
kubernetes ClusterIP 10.43.0.1 <none> 443/TCP 172d
openebs-apiservice ClusterIP 10.43.145.110 <none> 5656/TCP 18d
pvc-183d1ed8-b958-11e9-9327-246e964713cc ClusterIP 10.43.18.110 <none> 3260/TCP,7777/TCP,6060/TCP,9500/TCP 18d
pvc-18407d32-b958-11e9-9327-246e964713cc ClusterIP 10.43.54.32 <none> 3260/TCP,7777/TCP,6060/TCP,9500/TCP 18d
pvc-1841fba9-b958-11e9-9327-246e964713cc ClusterIP 10.43.11.152 <none> 3260/TCP,7777/TCP,6060/TCP,9500/TCP 18d
pvc-2060e51f-bd55-11e9-9895-246e96472fd4 ClusterIP 10.43.228.195 <none> 3260/TCP,7777/TCP,6060/TCP,9500/TCP 13d
pvc-42ebb346-c52d-11e9-a8d8-246e96472fd4 ClusterIP 10.43.32.33 <none> 3260/TCP,7777/TCP,6060/TCP,9500/TCP 3d19h
pvc-4b74d121-bd55-11e9-9895-246e96472fd4 ClusterIP 10.43.19.99 <none> 3260/TCP,7777/TCP,6060/TCP,9500/TCP 13d
pvc-4f9ec197-c5bc-11e9-a8d8-246e96472fd4 ClusterIP 10.43.253.138 <none> 3260/TCP,7777/TCP,6060/TCP,9500/TCP 3d2h
pvc-4fe85389-c5bc-11e9-bca0-246e9647129c ClusterIP 10.43.162.141 <none> 3260/TCP,7777/TCP,6060/TCP,9500/TCP 3d2h
pvc-520a9c63-c82d-11e9-bca0-246e9647129c ClusterIP 10.43.229.40 <none> 3260/TCP,7777/TCP,6060/TCP,9500/TCP 11m
pvc-520c1f0a-c82d-11e9-bca0-246e9647129c ClusterIP 10.43.61.122 <none> 3260/TCP,7777/TCP,6060/TCP,9500/TCP 11m
pvc-520d4d6e-c82d-11e9-bca0-246e9647129c ClusterIP 10.43.193.132 <none> 3260/TCP,7777/TCP,6060/TCP,9500/TCP 11m
pvc-52106708-c82d-11e9-bca0-246e9647129c ClusterIP 10.43.1.178 <none> 3260/TCP,7777/TCP,6060/TCP,9500/TCP 11m
pvc-52118f9a-c82d-11e9-bca0-246e9647129c ClusterIP 10.43.224.8 <none> 3260/TCP,7777/TCP,6060/TCP,9500/TCP 11m
pvc-52129ccb-c82d-11e9-bca0-246e9647129c ClusterIP 10.43.78.19 <none> 3260/TCP,7777/TCP,6060/TCP,9500/TCP 11m
pvc-645b2bfa-c5b9-11e9-bca0-246e9647129c ClusterIP 10.43.13.153 <none> 3260/TCP,7777/TCP,6060/TCP,9500/TCP 3d3h
pvc-6d6619be-bd5c-11e9-9895-246e96472fd4 ClusterIP 10.43.128.246 <none> 3260/TCP,7777/TCP,6060/TCP,9500/TCP 13d
pvc-76fb5d53-c829-11e9-bca0-246e9647129c-ctrl-svc ClusterIP 10.43.27.210 <none> 3260/TCP,9501/TCP,9500/TCP 39m
pvc-76fce4db-c829-11e9-bca0-246e9647129c-ctrl-svc ClusterIP 10.43.229.81 <none> 3260/TCP,9501/TCP,9500/TCP 39m
pvc-76fe2152-c829-11e9-bca0-246e9647129c-ctrl-svc ClusterIP 10.43.91.48 <none> 3260/TCP,9501/TCP,9500/TCP 39m
pvc-77019ed1-c829-11e9-bca0-246e9647129c-ctrl-svc ClusterIP 10.43.228.157 <none> 3260/TCP,9501/TCP,9500/TCP 39m
pvc-7702cb66-c829-11e9-bca0-246e9647129c-ctrl-svc ClusterIP 10.43.110.243 <none> 3260/TCP,9501/TCP,9500/TCP 39m
pvc-7703e236-c829-11e9-bca0-246e9647129c-ctrl-svc ClusterIP 10.43.228.30 <none> 3260/TCP,9501/TCP,9500/TCP 39m
pvc-d1c4de59-c5bb-11e9-a8d8-246e96472fd4 ClusterIP 10.43.15.54 <none> 3260/TCP,7777/TCP,6060/TCP,9500/TCP 3d2h
pvc-d1e5d32f-c5bb-11e9-bca0-246e9647129c ClusterIP 10.43.30.101 <none> 3260/TCP,7777/TCP,6060/TCP,9500/TCP 3d2h
pvc-ec86ec2e-bd54-11e9-9895-246e96472fd4 ClusterIP 10.43.235.49 <none> 3260/TCP,7777/TCP,6060/TCP,9500/TCP 13d
pvc-ec89e6ca-bd54-11e9-9895-246e96472fd4 ClusterIP 10.43.197.19 <none> 3260/TCP,7777/TCP,6060/TCP,9500/TCP 13d
- `kubectl get sc`:
NAME PROVISIONER AGE
openebs-cstor-disk openebs.io/provisioner-iscsi 18d
openebs-jiva-default openebs.io/provisioner-iscsi 18d
openebs-snapshot-promoter volumesnapshot.external-storage.k8s.io/snapshot-promoter 18d
- `kubectl get pv`:
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
pvc-183d1ed8-b958-11e9-9327-246e964713cc 10Gi RWO Delete Bound default/data-default-consul-server-0 openebs-cstor-disk 18d
pvc-18407d32-b958-11e9-9327-246e964713cc 10Gi RWO Delete Bound default/data-default-consul-server-1 openebs-cstor-disk 18d
pvc-1841fba9-b958-11e9-9327-246e964713cc 10Gi RWO Delete Bound default/data-default-consul-server-2 openebs-cstor-disk 18d
pvc-2060e51f-bd55-11e9-9895-246e96472fd4 4Gi RWO Delete Bound default/data-elk-elasticsearch-master-1 openebs-cstor-disk 13d
pvc-42ebb346-c52d-11e9-a8d8-246e96472fd4 10Gi RWO Delete Bound cattle-prometheus/grafana-cluster-monitoring openebs-cstor-disk 3d19h
pvc-4b74d121-bd55-11e9-9895-246e96472fd4 4Gi RWO Delete Bound default/data-elk-elasticsearch-master-2 openebs-cstor-disk 13d
pvc-4f9ec197-c5bc-11e9-a8d8-246e96472fd4 10Gi RWO Delete Bound cattle-prometheus-p-h625k/grafana-project-monitoring openebs-cstor-disk 3d2h
pvc-4fe85389-c5bc-11e9-bca0-246e9647129c 50Gi RWO Delete Bound cattle-prometheus-p-h625k/hulk-prometheus-project-monitoring-0 openebs-cstor-disk 3d2h
pvc-520a9c63-c82d-11e9-bca0-246e9647129c 8Gi RWO Delete Bound default/data-kafka-zookeeper-0 openebs-cstor-disk 11m
pvc-520c1f0a-c82d-11e9-bca0-246e9647129c 8Gi RWO Delete Bound default/data-kafka-zookeeper-1 openebs-cstor-disk 11m
pvc-520d4d6e-c82d-11e9-bca0-246e9647129c 8Gi RWO Delete Bound default/data-kafka-zookeeper-2 openebs-cstor-disk 11m
pvc-52106708-c82d-11e9-bca0-246e9647129c 256Gi RWO Delete Bound default/data-kafka-0 openebs-cstor-disk 11m
pvc-52118f9a-c82d-11e9-bca0-246e9647129c 256Gi RWO Delete Bound default/data-kafka-1 openebs-cstor-disk 11m
pvc-52129ccb-c82d-11e9-bca0-246e9647129c 256Gi RWO Delete Bound default/data-kafka-2 openebs-cstor-disk 11m
pvc-645b2bfa-c5b9-11e9-bca0-246e9647129c 50Gi RWO Delete Bound cattle-prometheus/stanlee-prometheus-cluster-monitoring-0 openebs-cstor-disk 3d3h
pvc-6d6619be-bd5c-11e9-9895-246e96472fd4 30Gi RWO Delete Bound default/data-elk-elasticsearch-data-1 openebs-cstor-disk 13d
pvc-d1c4de59-c5bb-11e9-a8d8-246e96472fd4 10Gi RWO Delete Bound cattle-prometheus-p-9jmpq/grafana-project-monitoring openebs-cstor-disk 3d2h
pvc-d1e5d32f-c5bb-11e9-bca0-246e9647129c 50Gi RWO Delete Bound cattle-prometheus-p-9jmpq/ironman-prometheus-project-monitoring-0 openebs-cstor-disk 3d2h
pvc-ec86ec2e-bd54-11e9-9895-246e96472fd4 30Gi RWO Delete Bound default/data-elk-elasticsearch-data-0 openebs-cstor-disk 13d
pvc-ec89e6ca-bd54-11e9-9895-246e96472fd4 4Gi RWO Delete Bound default/data-elk-elasticsearch-master-0 openebs-cstor-disk 13d
- `kubectl get pvc`:
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
data-default-consul-server-0 Bound pvc-183d1ed8-b958-11e9-9327-246e964713cc 10Gi RWO openebs-cstor-disk 18d
data-default-consul-server-1 Bound pvc-18407d32-b958-11e9-9327-246e964713cc 10Gi RWO openebs-cstor-disk 18d
data-default-consul-server-2 Bound pvc-1841fba9-b958-11e9-9327-246e964713cc 10Gi RWO openebs-cstor-disk 18d
data-elk-elasticsearch-data-0 Bound pvc-ec86ec2e-bd54-11e9-9895-246e96472fd4 30Gi RWO openebs-cstor-disk 13d
data-elk-elasticsearch-data-1 Bound pvc-6d6619be-bd5c-11e9-9895-246e96472fd4 30Gi RWO openebs-cstor-disk 13d
data-elk-elasticsearch-master-0 Bound pvc-ec89e6ca-bd54-11e9-9895-246e96472fd4 4Gi RWO openebs-cstor-disk 13d
data-elk-elasticsearch-master-1 Bound pvc-2060e51f-bd55-11e9-9895-246e96472fd4 4Gi RWO openebs-cstor-disk 13d
data-elk-elasticsearch-master-2 Bound pvc-4b74d121-bd55-11e9-9895-246e96472fd4 4Gi RWO openebs-cstor-disk 13d
data-kafka-0 Bound pvc-52106708-c82d-11e9-bca0-246e9647129c 256Gi RWO openebs-cstor-disk 12m
data-kafka-1 Bound pvc-52118f9a-c82d-11e9-bca0-246e9647129c 256Gi RWO openebs-cstor-disk 12m
data-kafka-2 Bound pvc-52129ccb-c82d-11e9-bca0-246e9647129c 256Gi RWO openebs-cstor-disk 12m
data-kafka-zookeeper-0 Bound pvc-520a9c63-c82d-11e9-bca0-246e9647129c 8Gi RWO openebs-cstor-disk 12m
data-kafka-zookeeper-1 Bound pvc-520c1f0a-c82d-11e9-bca0-246e9647129c 8Gi RWO openebs-cstor-disk 12m
data-kafka-zookeeper-2 Bound pvc-520d4d6e-c82d-11e9-bca0-246e9647129c 8Gi RWO openebs-cstor-disk 12m
- OS (from `/etc/os-release`):
NAME="Ubuntu"
VERSION="16.04.6 LTS (Xenial Xerus)"
ID=ubuntu
ID_LIKE=debian
PRETTY_NAME="Ubuntu 16.04.6 LTS"
VERSION_ID="16.04"
HOME_URL="http://www.ubuntu.com/"
SUPPORT_URL="http://help.ubuntu.com/"
BUG_REPORT_URL="http://bugs.launchpad.net/ubuntu/"
VERSION_CODENAME=xenial
UBUNTU_CODENAME=xenial
- Kernel (from `uname -a`):
Linux server1 4.4.0-154-generic #181-Ubuntu SMP Tue Jun 25 05:29:03 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux