Container using NFS mount gets stuck

I’m new to Rancher and Convoy, so I don’t know which category this really belongs to.

Currently, I plan to deploy Alfresco and Postgres. Using the predefined “Alfresco” package from the library, I cannot get it to use NFS as backend storage. The postgres container keeps initializing, and the volumes never get mounted (checked under Infrastructure → Storage Pools).

Here’s what I did:
In the Rancher web UI, under Catalog → Library, I downloaded and installed convoy-nfs. I named it “nasnfs” and added two volumes to it, one for Alfresco and one for Postgres.

(When I click on the storage pool’s name, an empty page with just the header shows up. Is this expected?)


First, I tried installing without starting the services, then used the upgrade button on the service to add the volumes to the container. That didn’t work.
Second, I tried to copy and modify the yml files. Here’s the one I tried last:

docker-compose.yml

alfresco:
  environment:
    CIFS_ENABLED: 'false'
    FTP_ENABLED: 'false'
  labels:
    io.rancher.container.pull_image: always
  tty: true
  image: webcenter/rancher-alfresco:v5.1.0-2
  links:
  - postgres:db
  stdin_open: true
  ports:
  - 8080:8080/tcp

postgres:
  environment:
    PGDATA: /var/lib/postgresql/data/pgdata
    POSTGRES_DB: ${database_name}
    POSTGRES_PASSWORD: ${database_password}
    POSTGRES_USER: ${database_user}
  labels:
    io.rancher.container.pull_image: always
  tty: true
  image: postgres:9.4
  stdin_open: true
  volumes:
  - volPostgres:/var/lib/postgresql/data/pgdata
  volume_driver: nasnfs

rancher-compose.yml (I left this one unchanged):

.catalog:
  name: "Alfresco"
  version: "5.1.0"
  description: "Alfresco Electronic Document Management"
  uuid: alfresco-5.1.0-2
  minimum_rancher_version: v0.56.0
  questions:
    - variable: database_name
      description: "Name of the Alfresco database"
      label: "Database name"
      type: "string"
      required: true
      default: "alfresco"
    - variable: database_user
      description: "Login for the Alfresco database"
      label: "Database login"
      type: "string"
      required: true
      default: "alfresco"
    - variable: database_password
      description: "Password for the Alfresco database"
      label: "Database password"
      type: "string"
      required: true
      default: "alfresco"

alfresco:
  scale: 1
  health_check:
    port: 8080
    interval: 5000
    unhealthy_threshold: 3
    strategy: recreate
    healthy_threshold: 2
    response_timeout: 5000

postgres:
  scale: 1
  health_check:
    port: 5432
    interval: 5000
    unhealthy_threshold: 3
    strategy: recreate
    response_timeout: 5000
    healthy_threshold: 2

Looking at the logs of the convoy-agent container (the one running /launch volume-agent), it seems the volume is unknown:

time="2016-03-31T11:03:20Z" level=debug msg="Handle plugin activate: POST /Plugin.Activate" pkg=daemon
time="2016-03-31T11:03:20Z" level=debug msg="Response:  {\n\t\"Implements\": [\n\t\t\"VolumeDriver\"\n\t]\n}" pkg=daemon
time="2016-03-31T11:03:50Z" level=debug msg="Handle plugin get volume: POST /VolumeDriver.Get" pkg=daemon
time="2016-03-31T11:03:50Z" level=debug msg="Request from docker: &{volPostgres map[]}" pkg=daemon
time="2016-03-31T11:03:50Z" level=debug msg="Response:  {\n\t\"Err\": \"Could not find volume volPostgres.\"\n}" pkg=daemon
time="2016-03-31T11:03:50Z" level=debug msg="Handle plugin get volume: POST /VolumeDriver.Get" pkg=daemon
time="2016-03-31T11:03:50Z" level=debug msg="Request from docker: &{volPostgres map[]}" pkg=daemon
time="2016-03-31T11:03:50Z" level=debug msg="Response:  {\n\t\"Err\": \"Could not find volume volPostgres.\"\n}" pkg=daemon
time="2016-03-31T11:03:50Z" level=debug msg="Handle plugin create volume: POST /VolumeDriver.Create" pkg=daemon
time="2016-03-31T11:03:50Z" level=debug msg="Request from docker: &{volPostgres map[]}" pkg=daemon
time="2016-03-31T11:03:50Z" level=debug msg="Create a new volume volPostgres for docker" pkg=daemon
time="2016-03-31T11:03:50Z" level=debug msg= event=create object=volume opts=map[VolumeIOPS:0 PrepareForVM:false Size:0 BackupURL: VolumeName:volPostgres VolumeDriverID: VolumeType:] pkg=daemon reason=prepare volume=volPostgres
time="2016-03-31T11:03:50Z" level=debug msg="Response:  {\n\t\"Err\": \"Coudln't get flock. Error: open /var/lib/rancher/convoy/nasnfs-629f07b2-7336-4111-9fcf-8b6970a988c1/mnt/config/vfs_volume_volPostgres.json.lock: no such file or directory\"\n}" pkg=daemon
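That last “no such file or directory” is what I’d expect when a file is created inside a directory that doesn’t exist: Convoy is apparently trying to open its lock file under …/mnt/config/ before anything has been mounted there. The same failure is easy to reproduce on any host (the paths below are just illustrative):

```shell
# Reproduce the failure mode: creating a file whose parent directory is
# missing fails with ENOENT, exactly like Convoy's flock open() above.
mkdir -p /tmp/flockdemo/mnt            # deliberately no config/ subdirectory
touch /tmp/flockdemo/mnt/config/vfs_volume_demo.json.lock 2>/dev/null \
  || echo "cannot create lock file: parent config/ directory is missing"
```

So the interesting question seems to be why config/ is empty or missing, i.e. whether the NFS share was ever mounted under that path.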

What am I doing wrong?

Here the versions I’m using:
convoy-nfs v0.3.0
Rancher v1.0.0
Cattle v0.159.2
User Interface v0.100.3
Rancher Compose v0.7.3

@TobiasSoltermann as far as I can tell, you aren’t doing anything wrong. On the host where you are getting this error (Coudln't get flock....), could you run the following as root and tell me what you get:

ls /var/lib/rancher/convoy/nasnfs-629f07b2-7336-4111-9fcf-8b6970a988c1/mnt/
ls /var/lib/rancher/convoy/nasnfs-629f07b2-7336-4111-9fcf-8b6970a988c1/mnt/config/

Also, could you run another test on that host to see whether this happens consistently for all volumes? Something like:

docker run -itd -v test:/asdf --volume-driver nasnfs ubuntu

Thanks for the response.

Running the ls commands on the host reporting the problem, the …/mnt folder does contain config/, but config/ is empty. On another host where the storage pool is also applied, the config folder doesn’t exist at all.

Running the last command results in

root@containers01:~# docker run -itd -v test:/asdf --volume-driver nasnfs ubuntu
docker: Error response from daemon: create test: Coudln't get flock. Error: open /var/lib/rancher/convoy/nasnfs-629f07b2-7336-4111-9fcf-8b6970a988c1/mnt/config/vfs_volume_test.json.lock: no such file or directory.
See 'docker run --help'.

Same on the second host. Of course the file doesn’t exist; isn’t the system supposed to create it? It should be able to, because I can touch files in that directory myself.

Running the mount command doesn’t show the NFS share mounted at all.

This makes me think of something else I need to check first: I might have a broken NFS client.
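A quick sanity check of the NFS client stack, independent of Convoy, would be something like the following (the package commands assume a Debian/Ubuntu host, and nas.example.com:/export is a placeholder for the actual NAS export):

```shell
# Check NFS client tooling and current mounts, independent of Convoy.
dpkg -s nfs-common >/dev/null 2>&1 || echo "nfs-common is not installed"
mount | grep 'type nfs' || echo "no NFS filesystems are currently mounted"
# If nfs-common is missing, install it and restart the convoy-nfs stack:
#   sudo apt-get install -y nfs-common
# To verify the export itself is reachable (placeholder server/path):
#   showmount -e nas.example.com
#   sudo mount -t nfs nas.example.com:/export /mnt/nfstest
```

If the client can’t mount the export manually, convoy-nfs has no chance of populating …/mnt/config/ either, which would explain the empty (or missing) config directory on both hosts.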