Yeah, I followed that… here's what I tried, step by step:
Step 1:
sudo find / -name longhorn-disk.cfg
/var/lib/longhorn/longhorn-disk.cfg
Step 2:
cd /var/lib/longhorn/replicas/
ls
pvc-005b7694-e8a4-4a87-ab7a-3986c6318164-241d81a2
pvc-1b6bb9bd-206d-491f-a6bb-ad1553b075ac-70bfde04
pvc-2d3dae19-1388-430c-bb35-45111ad39e25-59a599f0
pvc-2d3dae19-1388-430c-bb35-45111ad39e25-9ba908fe
pvc-34c9ace3-aae4-4f87-b51e-d25718631e37-7a4d1e2a
pvc-4c0d4b5c-ab39-4592-8128-b1c4a5d54972-2db95685
pvc-68e2b09a-95f6-4191-87bc-9cf4d6eb9926-f8121b6c
pvc-6f1ab933-ca06-41de-a882-00c78e3444bc-fbbfda5b
pvc-736cdee3-312e-4a0f-8630-a2d3ef6437de-9f45c8ca
pvc-92466a44-c299-4578-b04e-b881274928e8-7d9d6a2f
pvc-92466a44-c299-4578-b04e-b881274928e8-f68e97fc
pvc-ab97f936-87e1-47ce-bf76-b6e3120225b0-1fdf30eb
pvc-ab97f936-87e1-47ce-bf76-b6e3120225b0-915d4eca
pvc-b6ae6473-fce8-458a-be92-8a9f644f38ae-2fb46bac
pvc-df17d795-d8c2-4cc7-a018-26bce2e6fd04-915aebb7
pvc-fc385664-9c92-418e-8c5c-e74e66270279-700a4e23
Step 3:
sudo lsof pvc-2d3dae19-1388-430c-bb35-45111ad39e25-59a599f0
returns nothing
Step 4:
sudo cat pvc-2d3dae19-1388-430c-bb35-45111ad39e25-59a599f0/volume.meta
{"Size":107374182400,"Head":"volume-head-001.img","Dirty":true,"Rebuilding":true,"Error":"","Parent":"volume-snap-b48e0e59-7efd-4b80-872f-587f065013ad.img","SectorSize":512,"BackingFilePath":""}
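Those "Dirty":true / "Rebuilding":true flags made me suspicious that this particular replica was caught mid-rebuild, so I also dumped the flags for every replica directory to look for a clean copy. A quick sketch (it assumes Longhorn's default /var/lib/longhorn/replicas layout, and the helper name is just mine):

```shell
# Print the Dirty/Rebuilding flags from every replica's volume.meta under
# the given directory, so a clean (fully rebuilt) copy can be picked.
list_replica_flags() {
  for meta in "$1"/pvc-*/volume.meta; do
    [ -f "$meta" ] || continue
    printf '%s: ' "$meta"
    grep -oE '"(Dirty|Rebuilding)":[a-z]+' "$meta" | tr '\n' ' '
    echo
  done
}

list_replica_flags /var/lib/longhorn/replicas
```

A replica showing "Dirty":false and "Rebuilding":false would presumably be a safer candidate to export than the one above.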
Step 5: (on a different terminal)
docker run -v /dev:/host/dev -v /proc:/host/proc -v /apps/pvc-2d3dae19-1388-430c-bb35-45111ad39e25-59a599f0/volume:/volume --privileged longhornio/longhorn-engine:v1.2.0 launch-simple-longhorn pvc-2d3dae19-1388-430c-bb35-45111ad39e25-59a599f0 107374182400
output:
+ set -e
+ mount --rbind /host/dev /dev
+ volume=pvc-2d3dae19-1388-430c-bb35-45111ad39e25-59a599f0
+ size=107374182400
+ frontend=
+ '[' -z pvc-2d3dae19-1388-430c-bb35-45111ad39e25-59a599f0 ']'
+ '[' -z 107374182400 ']'
+ '[' -z ']'
Use default frontend TGT block device
+ echo Use default frontend TGT block device
+ frontend=tgt-blockdev
+ exec longhorn-instance-manager daemon
+ start
+ set +e
+ true
+ /usr/local/bin/grpc_health_probe -addr localhost:8500
time="2021-09-21T21:08:12Z" level=info msg="Storing process logs at path: /var/log/instances"
[longhorn-instance-manager] time="2021-09-21T21:08:12Z" level=info msg="Instance Manager listening to localhost:8500"
timeout: failed to connect service "localhost:8500" within 1s
+ [[ 2 -eq 0 ]]
+ sleep 1
+ true
+ /usr/local/bin/grpc_health_probe -addr localhost:8500
status: SERVING
+ [[ 0 -eq 0 ]]
+ echo longhorn instance manager is ready
+ break
+ set -e
longhorn instance manager is ready
+ tgtd -f
+ longhorn-instance-manager process create --name pvc-2d3dae19-1388-430c-bb35-45111ad39e25-59a599f0-r --binary /usr/local/bin/longhorn --port-count 15 --port-args --listen,localhost: -- replica /volume/ --size 107374182400
+ tee /var/log/tgtd.log
tgtd: iser_ib_init(3431) Failed to initialize RDMA; load kernel modules?
tgtd: work_timer_start(146) use timer_fd based scheduler
tgtd: bs_init(387) use signalfd notification
[longhorn-instance-manager] time="2021-09-21T21:08:14Z" level=info msg="Process Manager: prepare to create process pvc-2d3dae19-1388-430c-bb35-45111ad39e25-59a599f0-r"
[longhorn-instance-manager] time="2021-09-21T21:08:14Z" level=info msg="Process Manager: created process pvc-2d3dae19-1388-430c-bb35-45111ad39e25-59a599f0-r"
{
"name": "pvc-2d3dae19-1388-430c-bb35-45111ad39e25-59a599f0-r",
"binary": "/usr/local/bin/longhorn",
"args": [
"replica",
"/volume/",
"--size",
"107374182400",
"--listen",
"localhost:10000"
],
"portCount": 15,
"portArgs": [
"--listen,localhost:"
],
"processStatus": {
"state": "starting",
"errorMsg": "",
"portStart": 10000,
"portEnd": 10014
},
"deleted": false
}
+ sleep 5
[pvc-2d3dae19-1388-430c-bb35-45111ad39e25-59a599f0-r] time="2021-09-21T21:08:14Z" level=info msg="Listening on data server localhost:10001"
[pvc-2d3dae19-1388-430c-bb35-45111ad39e25-59a599f0-r] time="2021-09-21T21:08:14Z" level=info msg="Listening on sync agent server localhost:10002"
time="2021-09-21T21:08:14Z" level=info msg="Listening on gRPC Replica server localhost:10000"
[pvc-2d3dae19-1388-430c-bb35-45111ad39e25-59a599f0-r] time="2021-09-21T21:08:14Z" level=info msg="Listening on sync localhost:10002"
[longhorn-instance-manager] time="2021-09-21T21:08:15Z" level=info msg="Process pvc-2d3dae19-1388-430c-bb35-45111ad39e25-59a599f0-r has started at localhost:10000"
+ longhorn-instance-manager process create --name pvc-2d3dae19-1388-430c-bb35-45111ad39e25-59a599f0-e --binary /usr/local/bin/longhorn --port-count 1 --port-args --listen,localhost: -- controller pvc-2d3dae19-1388-430c-bb35-45111ad39e25-59a599f0 --frontend tgt-blockdev --replica tcp://localhost:10000
[longhorn-instance-manager] time="2021-09-21T21:08:19Z" level=info msg="Process Manager: prepare to create process pvc-2d3dae19-1388-430c-bb35-45111ad39e25-59a599f0-e"
[longhorn-instance-manager] time="2021-09-21T21:08:19Z" level=info msg="Process Manager: created process pvc-2d3dae19-1388-430c-bb35-45111ad39e25-59a599f0-e"
{
"name": "pvc-2d3dae19-1388-430c-bb35-45111ad39e25-59a599f0-e",
"binary": "/usr/local/bin/longhorn",
"args": [
"controller",
"pvc-2d3dae19-1388-430c-bb35-45111ad39e25-59a599f0",
"--frontend",
"tgt-blockdev",
"--replica",
"tcp://localhost:10000",
"--listen",
"localhost:10015"
],
"portCount": 1,
"portArgs": [
"--listen,localhost:"
],
"processStatus": {
"state": "starting",
"errorMsg": "",
"portStart": 10015,
"portEnd": 10015
},
"deleted": false
}
[pvc-2d3dae19-1388-430c-bb35-45111ad39e25-59a599f0-e] time="2021-09-21T21:08:19Z" level=info msg="Starting with replicas [\"tcp://localhost:10000\"]"
[pvc-2d3dae19-1388-430c-bb35-45111ad39e25-59a599f0-e] time="2021-09-21T21:08:19Z" level=info msg="Connecting to remote: localhost:10000"
[pvc-2d3dae19-1388-430c-bb35-45111ad39e25-59a599f0-e] time="2021-09-21T21:08:19Z" level=info msg="Opening: localhost:10000"
[pvc-2d3dae19-1388-430c-bb35-45111ad39e25-59a599f0-r] time="2021-09-21T21:08:19Z" level=info msg="New connection from: 127.0.0.1:41142"
[pvc-2d3dae19-1388-430c-bb35-45111ad39e25-59a599f0-r] time="2021-09-21T21:08:19Z" level=info msg="Opening volume /volume/, size 107374182400/512"
[pvc-2d3dae19-1388-430c-bb35-45111ad39e25-59a599f0-e] time="2021-09-21T21:08:19Z" level=info msg="Adding backend: tcp://localhost:10000"
[pvc-2d3dae19-1388-430c-bb35-45111ad39e25-59a599f0-e] time="2021-09-21T21:08:19Z" level=info msg="Start monitoring tcp://localhost:10000"
[pvc-2d3dae19-1388-430c-bb35-45111ad39e25-59a599f0-e] time="2021-09-21T21:08:20Z" level=info msg="Get backend tcp://localhost:10000 revision counter 0"
[pvc-2d3dae19-1388-430c-bb35-45111ad39e25-59a599f0-e] time="2021-09-21T21:08:20Z" level=info msg="device pvc-2d3dae19-1388-430c-bb35-45111ad39e25-59a599f0: SCSI device /dev/longhorn/pvc-2d3dae19-1388-430c-bb35-45111ad39e25-59a599f0 shutdown"
[pvc-2d3dae19-1388-430c-bb35-45111ad39e25-59a599f0-e] go-iscsi-helper: tgtd is already running
[pvc-2d3dae19-1388-430c-bb35-45111ad39e25-59a599f0-e] time="2021-09-21T21:08:21Z" level=info msg="go-iscsi-helper: found available target id 1"
tgtd: device_mgmt(246) sz:119 params:path=/var/run/longhorn-pvc-2d3dae19-1388-430c-bb35-45111ad39e25-59a599f0.sock,bstype=longhorn,bsopts=size=107374182400
tgtd: bs_thread_open(409) 16
[pvc-2d3dae19-1388-430c-bb35-45111ad39e25-59a599f0-e] time="2021-09-21T21:08:21Z" level=info msg="New data socket connection established"
[longhorn-instance-manager] time="2021-09-21T21:08:21Z" level=info msg="wait for gRPC service of process pvc-2d3dae19-1388-430c-bb35-45111ad39e25-59a599f0-e to start at localhost:10015"
[pvc-2d3dae19-1388-430c-bb35-45111ad39e25-59a599f0-e] time="2021-09-21T21:08:21Z" level=info msg="default: automatically rescan all LUNs of all iscsi sessions"
[longhorn-instance-manager] time="2021-09-21T21:08:22Z" level=info msg="wait for gRPC service of process pvc-2d3dae19-1388-430c-bb35-45111ad39e25-59a599f0-e to start at localhost:10015"
[pvc-2d3dae19-1388-430c-bb35-45111ad39e25-59a599f0-e] time="2021-09-21T21:08:22Z" level=info msg="Creating device /dev/longhorn/pvc-2d3dae19-1388-430c-bb35-45111ad39e25-59a599f0 8:16"
[pvc-2d3dae19-1388-430c-bb35-45111ad39e25-59a599f0-e] time="2021-09-21T21:08:22Z" level=info msg="device pvc-2d3dae19-1388-430c-bb35-45111ad39e25-59a599f0: SCSI device sdb created"
time="2021-09-21T21:08:22Z" level=info msg="Listening on gRPC Controller server: localhost:10015"
[longhorn-instance-manager] time="2021-09-21T21:08:23Z" level=info msg="wait for gRPC service of process pvc-2d3dae19-1388-430c-bb35-45111ad39e25-59a599f0-e to start at localhost:10015"
[longhorn-instance-manager] time="2021-09-21T21:08:23Z" level=info msg="Process pvc-2d3dae19-1388-430c-bb35-45111ad39e25-59a599f0-e has started at localhost:10015"
Check lsblk:
lsblk -f
NAME FSTYPE LABEL UUID FSAVAIL FSUSE% MOUNTPOINT
loop0 squashfs 0 100% /snap/core
loop1 squashfs 0 100% /snap/core
loop2 squashfs 0 100% /snap/core
loop3 squashfs 0 100% /snap/lxd/
loop4 squashfs 0 100% /snap/snap
loop5 squashfs 0 100% /snap/snap
loop6 squashfs 0 100% /snap/lxd/
sda
├─sda1
├─sda2 ext4 697ff3f5-304c-4da5-ba74-2d3f06bcfb62 706.2M 21% /boot
└─sda3 LVM2_membe quWdv1-PMfv-g85T-ydec-kgd2-wLG6-xxUzBa
└─ubuntu--vg-ubuntu--lv
ext4 0045b8d2-c979-4d7c-9b92-e8371c1811d1 46.4G 72% /
sdb
sr0 iso9660 Ubuntu-Server 20.04.1 LTS amd64 2020-07-31-17-35-29-00
Step 6:
sudo mount /dev/sdb /apps/longhorn/
mount: /apps/longhorn: wrong fs type, bad option, bad superblock on /dev/sdb, missing codepage or helper program, or other error.
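After the mount failed I wanted to check whether /dev/sdb carries a filesystem signature at all (my guess: a half-rebuilt replica wouldn't). `blkid /dev/sdb` or `file -s /dev/sdb` would be the quick version; here's a dependency-free sketch that just peeks at the ext2/3/4 superblock magic (the helper name is mine, and it only covers ext filesystems):

```shell
# Check a block device (or image file) for an ext2/3/4 superblock before
# trying to mount it. The magic 0xEF53 (little-endian: 53 ef) sits at
# byte offset 1080 (1024-byte superblock offset + 56).
probe_ext() {
  magic=$(dd if="$1" bs=1 skip=1080 count=2 2>/dev/null | od -An -tx1 | tr -d ' \n')
  if [ "$magic" = "53ef" ]; then
    echo "ext filesystem signature found on $1"
  else
    echo "no ext signature on $1 (got: ${magic:-empty})"
  fi
}

probe_ext /dev/sdb
```

If no signature is there, no mount option will help; the data would have to come from a replica that finished rebuilding.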
Am I doing something wrong here?
There should be 3 replicas, one on each host, from before I fully hammed my Rancher instance.
But Rancher aside, it would be awesome to have a process to extract the data from the Longhorn volumes directly.
This one deployment is the problem case because it wasn't running when Rancher went all ham, so I wasn't able to "docker cp" the content out of the pods.
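For reference, here is the whole attempt condensed into one dry-run sketch (paths, replica name, and the v1.2.0 engine image are from my setup above; it only prints the docker command instead of executing it):

```shell
# Rebuild the launch-simple-longhorn invocation from a replica's volume.meta.
# Dry run: the assembled docker command is printed, not executed.
REPLICA=pvc-2d3dae19-1388-430c-bb35-45111ad39e25-59a599f0
META=/var/lib/longhorn/replicas/$REPLICA/volume.meta
# Pull the volume size (in bytes) out of the replica metadata.
SIZE=$(grep -o '"Size":[0-9]*' "$META" 2>/dev/null | cut -d: -f2)
CMD="docker run -v /dev:/host/dev -v /proc:/host/proc -v /apps/$REPLICA/volume:/volume --privileged longhornio/longhorn-engine:v1.2.0 launch-simple-longhorn $REPLICA ${SIZE:-SIZE_IN_BYTES}"
echo "$CMD"
```

With a clean replica, the printed command is what I'd run; the exported device should then show up under /dev/longhorn/ as in the log above.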
thanks in advance
#saveMyBacon