ebishop
December 15, 2015, 10:29pm
1
I started up glusterfs from the catalog, and it seemed to start up just fine.
Nothing suspicious in the container logs, and all green in the UI.
I have 3 VMs and basically 3 of everything started by glusterfs and convoy.
But when I try to start up a jenkins using a named mount (as the video suggests) it
just spins on “Activating”.
Then, after a long while…
The app was “removed” and I got this error:
Error looking up volume plugin convoy: Plugin not found.
Do I need to install a convoy plugin on the hosts?
denise
December 15, 2015, 10:48pm
2
Did you install the convoy-gluster catalog item to be used as a volume driver?
When deploying the convoy-gluster catalog item, please make sure that the hosts have the host label set up as part of the configuration options in the template.
You can read the step by step example in our docs.
http://docs.rancher.com/rancher/rancher-services/storage-service/#example-using-glusterfs
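For reference, the host-label requirement is enforced through Rancher's scheduling labels. A minimal sketch of the relevant fragment, assuming the template's default label name convoy.glusterfs=true (yours may differ if you changed it in the configuration options):

```yaml
# Sketch only: how a catalog service pins its containers to labeled hosts.
# Hosts must carry the label convoy.glusterfs=true (the template default)
# for the convoy-gluster containers to be scheduled onto them.
convoy-gluster:
  labels:
    io.rancher.scheduler.affinity:host_label: convoy.glusterfs=true
```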
ebishop
December 16, 2015, 1:39am
3
I mostly followed the instructions. The default label in the catalog is convoy.glusterfs=true, so that’s what I used. I think the part I was missing was the following:
In the Advanced Options -> Volumes tab, the Volume Driver will be the name of the storage pool that was created.
In any case, that got me a little farther. But I’m getting permission errors from Jenkins when it tries to write to a log:
12/15/2015 7:36:21 PM /usr/local/bin/jenkins.sh: line 25: /var/jenkins_home/copy_reference_file.log: Permission denied
12/15/2015 7:36:36 PM /usr/local/bin/jenkins.sh: line 25: /var/jenkins_home/copy_reference_file.log: Permission denied
12/15/2015 7:36:42 PM /usr/local/bin/jenkins.sh: line 25: /var/jenkins_home/copy_reference_file.log: Permission denied
12/15/2015 7:36:58 PM /usr/local/bin/jenkins.sh: line 25: /var/jenkins_home/copy_reference_file.log: Permission denied
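That Permission denied is typical of the official jenkins image, which runs as the jenkins user (uid 1000) while a freshly created Gluster-backed volume is owned by root. One blunt workaround people used at the time — sketched here only under the assumption that running the container as root is acceptable for a demo — is:

```yaml
# Sketch, not a recommendation: run the container as root so the entrypoint
# can write to the root-owned volume. The cleaner fix is to chown the
# volume's contents to uid 1000 (the jenkins user) once, then drop this.
edjenkins:
  image: jenkins:latest
  user: root
  volume_driver: convoy-gluster
  volumes:
    - jdata:/var/jenkins_home
```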
denise
December 16, 2015, 4:43am
4
What’s the volume name that you are trying to pass in? I’ll try it out myself.
Would you be willing to share your docker-compose.yml of the jenkins service so I can look into it?
Note: I’ve only ever launched the jenkins template from our catalog service.
ebishop
December 16, 2015, 5:42pm
5
Here’s my docker-compose.yml file:
edjenkins:
  name: edjenkins
  image: jenkins:latest
  ports:
    - 8080:8080
  labels:
    io.rancher.scheduler.affinity:container_label_soft_ne: io.rancher.stack_service.name=$${stack_name}/$${service_name}
  volume_driver: convoy-gluster
  volumes:
    - jdata:/var/jenkins_home
But I’d rather use the Jenkins catalog with convoy, so please point me to an example of that if you have one.
ebishop
December 16, 2015, 5:55pm
6
I tried loading Ghost like in the video, and it seems to work, btw. But I don’t know how to set up Ghost, so I can’t actually do anything with it.
Do you have a suggestion for another app I can run to verify that the convoy-glusterfs is working properly?
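For a dependency-free smoke test, a throwaway service that just writes a file into a named volume is enough; if the file is visible from a second container (or survives a restart on another host), the driver is doing its job. A sketch, with hypothetical service/volume names voltest and testvol:

```yaml
# Hypothetical smoke-test service: writes a marker file into a
# convoy-gluster volume, then idles so you can inspect it.
voltest:
  image: busybox
  command: sh -c 'echo convoy-ok > /data/marker && sleep 3600'
  volume_driver: convoy-gluster
  volumes:
    - testvol:/data
```

If a second instance of the service on another host shows the same /data/marker, the volume is genuinely shared.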
ebishop
December 16, 2015, 7:29pm
7
I took a closer look at the video. http:///ghost/setup
So Ghost is working for me now…
denise
December 18, 2015, 8:31pm
8
We don’t currently have a way to launch the Jenkins catalog item so that it uses convoy-gluster. It seems like some additional configuration would be needed.
But are you still having issues trying to use convoy-gluster? I don’t know if I’ll be able to help you get Jenkins set up with it, but I’d like to make sure you are able to use the volume plugin.
ebishop
December 19, 2015, 10:13pm
9
Gluster and gluster/convoy are working for me now. Thanks for asking.
bscott
January 5, 2016, 8:03pm
10
What was the final solution?
I tried using “convoy-gluster” (the name of the storage pool), but I still get the error that the plugin can’t be found.
I just wanted to demo glusterfs working, so I used ghost instead of Jenkins. I have not tried to get jenkins working, since ghost worked for me.
bscott
January 5, 2016, 8:27pm
12
I’m just trying to get Gluster+Convoy working, but I see errors in the Convoy+Gluster logs.
1/5/2016 12:22:09 PM Waiting for metadata.time="2016-01-05T20:22:09Z" level=info msg="Execing [/usr/bin/nsenter --mount=/proc/30160/ns/mnt -F -- /var/lib/docker/aufs/mnt/aea9b341b4591c54d96a6098205f9502b80bb989c215783af128fb602323ed4e/var/lib/rancher/convoy-agent/share-mnt --stage2 /var/lib/rancher/convoy/convoy-gluster-6114a82e-ad91-4c86-aa45-cd089502978a -- /launch volume-agent-glusterfs-internal]"
1/5/2016 12:22:09 PM Registering convoy socket at /var/run/conoy-convoy-gluster.sock
1/5/2016 12:22:09 PM time="2016-01-05T20:22:09Z" level=info msg="Listening for health checks on 0.0.0.0:10241/healthcheck"
1/5/2016 12:22:09 PM time="2016-01-05T20:22:09Z" level=info msg="Got: driver-opts [glusterfs.defaultvolumepool=my_vol glusterfs.servers=glusterfs]"
1/5/2016 12:22:09 PM time="2016-01-05T20:22:09Z" level=info msg="Got: root /var/lib/rancher/convoy/convoy-gluster-6114a82e-ad91-4c86-aa45-cd089502978a"
1/5/2016 12:22:09 PM time="2016-01-05T20:22:09Z" level=info msg="Got: drivers [glusterfs]"
1/5/2016 12:22:09 PM time="2016-01-05T20:22:09Z" level=info msg="Launching convoy with args: [--socket=/host/var/run/conoy-convoy-gluster.sock daemon --driver-opts=glusterfs.defaultvolumepool=my_vol --driver-opts=glusterfs.servers=glusterfs --root=/var/lib/rancher/convoy/convoy-gluster-6114a82e-ad91-4c86-aa45-cd089502978a --drivers=glusterfs]"
1/5/2016 12:22:09 PM time="2016-01-05T20:22:09Z" level=debug msg="Creating config at /var/lib/rancher/convoy/convoy-gluster-6114a82e-ad91-4c86-aa45-cd089502978a" pkg=daemon
1/5/2016 12:22:09 PM time="2016-01-05T20:22:09Z" level=debug msg= driver=glusterfs driver_opts=map[glusterfs.defaultvolumepool:my_vol glusterfs.servers:glusterfs] event=init pkg=daemon reason=prepare root="/var/lib/rancher/convoy/convoy-gluster-6114a82e-ad91-4c86-aa45-cd089502978a"
1/5/2016 12:22:09 PM time="2016-01-05T20:22:09Z" level=debug msg="Volume my_vol is being mounted it to /var/lib/rancher/convoy/convoy-gluster-6114a82e-ad91-4c86-aa45-cd089502978a/glusterfs/mounts/my_vol, with option [-t glusterfs]" pkg=util
1/5/2016 12:22:10 PM time="2016-01-05T20:22:10Z" level=debug msg="Cleaning up environment..." pkg=daemon
1/5/2016 12:22:10 PM time="2016-01-05T20:22:10Z" level=error msg="Failed to execute: mount [-t glusterfs glusterfs:/my_vol /var/lib/rancher/convoy/convoy-gluster-6114a82e-ad91-4c86-aa45-cd089502978a/glusterfs/mounts/my_vol], output Mount failed. Please check the log file for more details.\n, error exit status 1"
1/5/2016 12:22:10 PM {
1/5/2016 12:22:10 PM "Error": "Failed to execute: mount [-t glusterfs glusterfs:/my_vol /var/lib/rancher/convoy/convoy-gluster-6114a82e-ad91-4c86-aa45-cd089502978a/glusterfs/mounts/my_vol], output Mount failed. Please check the log file for more details.\n, error exit status 1"
1/5/2016 12:22:10 PM }
1/5/2016 12:22:10 PM time="2016-01-05T20:22:10Z" level=info msg="convoy exited with error: exit status 1"
1/5/2016 12:22:10 PM time="2016-01-05T20:22:10Z" level=info msg=Exiting.
1/5/2016 12:22:55 PM Waiting for metadata.time="2016-01-05T20:22:55Z" level=info msg="Execing [/usr/bin/nsenter --mount=/proc/30160/ns/mnt -F -- /var/lib/docker/aufs/mnt/aea9b341b4591c54d96a6098205f9502b80bb989c215783af128fb602323ed4e/var/lib/rancher/convoy-agent/share-mnt --stage2 /var/lib/rancher/convoy/convoy-gluster-6114a82e-ad91-4c86-aa45-cd089502978a -- /launch volume-agent-glusterfs-internal]"
1/5/2016 12:22:55 PM Registering convoy socket at /var/run/conoy-convoy-gluster.sock
1/5/2016 12:22:55 PM time="2016-01-05T20:22:55Z" level=info msg="Listening for health checks on 0.0.0.0:10241/healthcheck"
1/5/2016 12:22:55 PM time="2016-01-05T20:22:55Z" level=info msg="Got: drivers [glusterfs]"
1/5/2016 12:22:55 PM time="2016-01-05T20:22:55Z" level=info msg="Got: driver-opts [glusterfs.defaultvolumepool=my_vol glusterfs.servers=glusterfs]"
1/5/2016 12:22:55 PM time="2016-01-05T20:22:55Z" level=info msg="Got: root /var/lib/rancher/convoy/convoy-gluster-6114a82e-ad91-4c86-aa45-cd089502978a"
1/5/2016 12:22:55 PM time="2016-01-05T20:22:55Z" level=info msg="Launching convoy with args: [--socket=/host/var/run/conoy-convoy-gluster.sock daemon --drivers=glusterfs --driver-opts=glusterfs.defaultvolumepool=my_vol --driver-opts=glusterfs.servers=glusterfs --root=/var/lib/rancher/convoy/convoy-gluster-6114a82e-ad91-4c86-aa45-cd089502978a]"
1/5/2016 12:22:55 PM time="2016-01-05T20:22:55Z" level=debug msg="Creating config at /var/lib/rancher/convoy/convoy-gluster-6114a82e-ad91-4c86-aa45-cd089502978a" pkg=daemon
1/5/2016 12:22:55 PM time="2016-01-05T20:22:55Z" level=debug msg= driver=glusterfs driver_opts=map[glusterfs.defaultvolumepool:my_vol glusterfs.servers:glusterfs] event=init pkg=daemon reason=prepare root="/var/lib/rancher/convoy/convoy-gluster-6114a82e-ad91-4c86-aa45-cd089502978a"
1/5/2016 12:22:55 PM time="2016-01-05T20:22:55Z" level=debug msg="Volume my_vol is being mounted it to /var/lib/rancher/convoy/convoy-gluster-6114a82e-ad91-4c86-aa45-cd089502978a/glusterfs/mounts/my_vol, with option [-t glusterfs]" pkg=util
1/5/2016 12:22:56 PM time="2016-01-05T20:22:56Z" level=debug msg= driver=glusterfs event=init pkg=daemon reason=complete
1/5/2016 12:22:56 PM time="2016-01-05T20:22:56Z" level=debug msg="Registering GET, /info" pkg=daemon
1/5/2016 12:22:56 PM time="2016-01-05T20:22:56Z" level=debug msg="Registering GET, /uuid" pkg=daemon
1/5/2016 12:22:56 PM time="2016-01-05T20:22:56Z" level=debug msg="Registering GET, /volumes/list" pkg=daemon
1/5/2016 12:22:56 PM time="2016-01-05T20:22:56Z" level=debug msg="Registering GET, /volumes/" pkg=daemon
1/5/2016 12:22:56 PM time="2016-01-05T20:22:56Z" level=debug msg="Registering GET, /snapshots/" pkg=daemon
1/5/2016 12:22:56 PM time="2016-01-05T20:22:56Z" level=debug msg="Registering GET, /backups/list" pkg=daemon
1/5/2016 12:22:56 PM time="2016-01-05T20:22:56Z" level=debug msg="Registering GET, /backups/inspect" pkg=daemon
1/5/2016 12:22:56 PM time="2016-01-05T20:22:56Z" level=debug msg="Registering POST, /volumes/create" pkg=daemon
1/5/2016 12:22:56 PM time="2016-01-05T20:22:56Z" level=debug msg="Registering POST, /volumes/mount" pkg=daemon
1/5/2016 12:22:56 PM time="2016-01-05T20:22:56Z" level=debug msg="Registering POST, /volumes/umount" pkg=daemon
1/5/2016 12:22:56 PM time="2016-01-05T20:22:56Z" level=debug msg="Registering POST, /snapshots/create" pkg=daemon
1/5/2016 12:22:56 PM time="2016-01-05T20:22:56Z" level=debug msg="Registering POST, /backups/create" pkg=daemon
1/5/2016 12:22:56 PM time="2016-01-05T20:22:56Z" level=debug msg="Registering DELETE, /backups" pkg=daemon
1/5/2016 12:22:56 PM time="2016-01-05T20:22:56Z" level=debug msg="Registering DELETE, /volumes/" pkg=daemon
1/5/2016 12:22:56 PM time="2016-01-05T20:22:56Z" level=debug msg="Registering DELETE, /snapshots/" pkg=daemon
1/5/2016 12:22:56 PM time="2016-01-05T20:22:56Z" level=debug msg="Registering plugin handler POST, /Plugin.Activate" pkg=daemon
1/5/2016 12:22:56 PM time="2016-01-05T20:22:56Z" level=debug msg="Registering plugin handler POST, /VolumeDriver.Create" pkg=daemon
1/5/2016 12:22:56 PM time="2016-01-05T20:22:56Z" level=debug msg="Registering plugin handler POST, /VolumeDriver.Remove" pkg=daemon
1/5/2016 12:22:56 PM time="2016-01-05T20:22:56Z" level=debug msg="Registering plugin handler POST, /VolumeDriver.Mount" pkg=daemon
1/5/2016 12:22:56 PM time="2016-01-05T20:22:56Z" level=debug msg="Registering plugin handler POST, /VolumeDriver.Unmount" pkg=daemon
1/5/2016 12:22:56 PM time="2016-01-05T20:22:56Z" level=debug msg="Registering plugin handler POST, /VolumeDriver.Path" pkg=daemon
1/5/2016 12:22:56 PM time="2016-01-05T20:22:56Z" level=warning msg="Remove previous sockfile at /host/var/run/conoy-convoy-gluster.sock" pk
bscott
January 5, 2016, 8:45pm
13
Seems that the convoy agent container isn’t registering the convoy driver.
bscott
January 5, 2016, 9:53pm
14
I’ve made progress, but now I’m getting this:
ebishop
January 5, 2016, 10:39pm
15
I did a quick google for “/var/jenkins_home/copy_reference_file.log: Permission denied” and found this:
So maybe it’s an SELinux-related issue, or it’s down to the fact that Jenkins runs as the jenkins user.
denise
January 6, 2016, 7:23pm
16
@bscott If you had a very early version of convoy-gluster launched from an old template, you might be facing this issue, which matches your “convoy agent isn’t registering the convoy driver” symptom:
GitHub issue (opened 10:02PM - 03 Dec 15 UTC, closed 09:33PM - 13 Jun 16 UTC; labels: area/storage, kind/bug): “Recreating issue from #2893. Probably related to #2796, but I am trying with different storage-pool and volume names. I was recreating gluster...”
There are some cleanup details in the issue but not sure if it’s completely resolved for the user reporting it.