GlusterFS: peer probe: failed: Probe returned with Transport endpoint is not connected

Hi,
I'm trying to set up a GlusterFS stack to get a shared data pool.
The host OS is Ubuntu 15.10.

I recently migrated from Rancher 0.63 to 1.0 (rancher-compose 0.7.3, Docker 1.10.3).
I removed the GlusterFS stack and recreated one from the catalog (I added the privileged flag as described in the catalog entry documentation).
But the containers were not starting automatically.
I started the GlusterFS containers manually, and they are all in a green state now.
But when I look at the logs, some problems show up:
```
02/04/2016 09:56:58 peer probe: failed: Probe returned with Transport endpoint is not connected
02/04/2016 09:57:04 Waiting for all service containers to start...
02/04/2016 09:57:04 Containers are starting...
02/04/2016 09:57:04 Waiting for Gluster Daemons to come up
02/04/2016 09:57:39 Waiting for all service containers to start...
02/04/2016 09:57:39 Containers are starting...
02/04/2016 09:57:39 Waiting for Gluster Daemons to come up
02/04/2016 09:58:10 gluster peer probe 10.42.9.180
02/04/2016 09:58:10 peer probe: failed: Probe returned with Transport endpoint is not connected
02/04/2016 09:58:16 Waiting for all service containers to start...
02/04/2016 09:58:16 Containers are starting...
02/04/2016 09:58:16 Waiting for Gluster Daemons to come up
02/04/2016 09:58:59 gluster peer probe 10.42.9.180
02/04/2016 09:58:59 peer probe: failed: Probe returned with Transport endpoint is not connected
02/04/2016 09:59:16 Waiting for all service containers to start...
02/04/2016 09:59:16 Containers are starting...
02/04/2016 09:59:16 Waiting for Gluster Daemons to come up
02/04/2016 09:59:52 gluster peer probe 10.42.9.180
02/04/2016 09:59:52 peer probe: failed: Probe returned with Transport endpoint is not connected
```
Is there any command to get a better insight into what's going on?
Note that when I remove the old GlusterFS stack, the 'convoy-gluster' data pool remains.
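
For reference, these are the only checks I know of so far (a sketch: the container name and peer IP below come from my stack and logs, so they may need adjusting, and I'm not sure the image ships nc):

```shell
# Ask glusterd for its view of the cluster, from inside a server container
# (in Rancher the actual container name is something like
# glusterfs_glusterfs-server_1 -- check `docker ps`).
docker exec -it glusterfs_glusterfs-server_1 gluster peer status
docker exec -it glusterfs_glusterfs-server_1 gluster pool list

# glusterd writes its logs under /var/log/glusterfs/ inside the container.
docker exec -it glusterfs_glusterfs-server_1 ls /var/log/glusterfs/

# "Transport endpoint is not connected" usually means glusterd on the
# probed peer is not reachable on TCP port 24007, so test that directly
# (assumes nc is present in the image).
docker exec -it glusterfs_glusterfs-server_1 nc -zv 10.42.9.180 24007
```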

Charles.
PS: I created the GlusterFS stack with the rancher-compose CLI. Here is the docker-compose.yml file used:

```yml
glusterfs-server:
  image: rancher/glusterfs:v0.2.0
  volumes_from:
    - glusterfs-data
  labels:
    io.rancher.container.hostname_override: container_name
    io.rancher.sidekicks: glusterfs-daemon,glusterfs-data,glusterfs-volume-create
    io.rancher.scheduler.affinity:host_label: storage=big
    io.rancher.scheduler.affinity:container_label_ne: io.rancher.stack_service.name=$${stack_name}/$${service_name}
  privileged: true
  command: /opt/rancher/peerprobe.sh container:glusterfs-server
glusterfs-daemon:
  image: rancher/glusterfs:v0.2.0
  net: container:glusterfs-server
  cap_add:
    - SYS_ADMIN
  volumes_from:
    - glusterfs-data
  labels:
    io.rancher.container.dns: true
    io.rancher.container.network: true
    io.rancher.container.hostname_override: container_name
    io.rancher.scheduler.affinity:container_label_soft_ne: io.rancher.stack_service.name=$${stack_name}/glusterfs-server
  privileged: true
  command: "glusterd -p /var/run/gluster.pid -N"
glusterfs-volume-create:
  image: rancher/glusterfs:v0.2.0
  command: /opt/rancher/replicated_volume_create.sh container:glusterfs-server
  net: 'container:glusterfs-server'
  volumes_from:
    - glusterfs-data
  labels:
    io.rancher.container.hostname_override: container_name
    io.rancher.container.start_once: true
  privileged: true

# WARNING - DO NOT CHANGE ANYTHING BELOW
# DATA LOSS!!! CAN OCCUR DURING UPGRADE

glusterfs-data:
  image: rancher/glusterfs:v0.1.3
  command: /bin/true
  volumes:
    - /var/run
  labels:
    io.rancher.container.hostname_override: container_name
    io.rancher.container.start_once: true
  privileged: true
```
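
And this is roughly how I invoke rancher-compose to create the stack (a sketch from memory: the URL and keys below are placeholders, and the exact flags may differ with your rancher-compose version):

```shell
# Point rancher-compose at the Rancher server (placeholder values).
export RANCHER_URL=http://rancher-server:8080
export RANCHER_ACCESS_KEY=<access-key>
export RANCHER_SECRET_KEY=<secret-key>

# Create the stack from the docker-compose.yml above.
rancher-compose --project-name glusterfs up -d
```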

My issue seems to be related to https://github.com/rancher/rancher/issues/3670, according to some of the comments posted on that issue.
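
If it is the same cross-host networking problem as in that issue, I guess a quick test is whether the containers can reach each other at all over the Rancher managed network (a sketch: 10.42.9.180 is the peer IP from my logs, and it assumes bash and ping exist in the image):

```shell
# From the glusterfs-server container on one host, try to reach the peer
# on the other host over the 10.42.x.x managed network.
docker exec -it glusterfs_glusterfs-server_1 ping -c 3 10.42.9.180

# Then try glusterd's management port specifically (bash built-in TCP test).
docker exec -it glusterfs_glusterfs-server_1 \
  bash -c 'echo > /dev/tcp/10.42.9.180/24007 && echo "port 24007 reachable"'
```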

Charles.