How to set up /dev/shm?

I need access to a shared memory partition. Is there a recommended way of setting that up?

–Bill Noon

Hi Bill,

This StackOverflow page from April/March seems to describe some good approaches. Does it help you along at all?

-michael

Thanks Michael, but I think the problem is that RancherOS somehow loses the /dev/shm link. This is what I see via sudo df -h:

Filesystem                Size      Used Available Use% Mounted on
overlay                 209.5G      2.7G    195.5G   1% /
tmpfs                    62.9G         0     62.9G   0% /dev
df: /dev/shm: No such file or directory
df: /dev/mqueue: No such file or directory
/dev/sde1               209.5G      2.7G    195.5G   1% /home
/dev/sde1               209.5G      2.7G    195.5G   1% /opt
none                     62.9G    160.0K     62.9G   0% /var/run
/dev/sde1               209.5G      2.7G    195.5G   1% /var/log
devtmpfs                 62.9G         0     62.9G   0% /host/dev
/dev/sde1               209.5G      2.7G    195.5G   1% /var/lib/system-docker
...

To give you more context, I am setting up a Ceph cluster on RancherOS using containerized Ceph daemons.

When running the ceph commands, an LTTng error is triggered multiple times:

ceph health
libust[44/50]: Error: Error opening shm /lttng-ust-wait-5-0 (in get_wait_shm() at lttng-ust-comm.c:886)
libust[44/50]: Error: Error opening shm /lttng-ust-wait-5-0 (in get_wait_shm() at lttng-ust-comm.c:886)
libust[44/49]: Error: Error opening shm /lttng-ust-wait-5 (in get_wait_shm() at lttng-ust-comm.c:886)
libust[44/49]: Error: Error opening shm /lttng-ust-wait-5 (in get_wait_shm() at lttng-ust-comm.c:886)

The cluster seems to work, but the logs are a mess.

This doesn’t happen when creating a Ceph cluster in containers on Ubuntu.

–Bill

OK, I kept debugging this and got it working.

I ran the same ceph/demo container on a different RancherOS host and it ran fine as well.

The difference is that in a real Ceph cluster I need to bind-mount the host’s /dev into the container to get at the storage devices; I suspect that mount shadows the /dev/shm that would otherwise exist inside the container.

I am starting the ceph osd containers with:

docker run -d --privileged=true --net=host --name=osd \
  -v /opt/ceph/etc:/etc/ceph \
  -v /opt/ceph/var:/var/lib/ceph \
  -v /dev:/dev \
  ceph-daemon osd_kv

I was able to get things running better by modifying the entrypoint.sh script to check for /dev/shm and mount it if it doesn’t exist in the container.
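
The check I added near the top of entrypoint.sh is roughly this (a sketch from memory; the tmpfs options are just what I would pick, adjust as needed):

# create and mount /dev/shm if the bind-mounted host /dev doesn't provide one
if [ ! -d /dev/shm ]; then
  mkdir -p /dev/shm
  mount -t tmpfs -o nosuid,nodev,mode=1777 shm /dev/shm
fi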

–Bill

Hey Bill, can you share your Ceph containers’ configs? Preferably as compose templates, but anything else is fine too.

It would be great to see a blog post on the topic; I definitely want to see Ceph on RancherOS.

Cheers
–Ivan

Ivan, I don’t think that Ceph in containers is ready yet. The biggest problem is that the mount namespace is private in Docker 1.7: I can’t mount the RBD volumes in one container and have them visible in other containers. On Ubuntu or Red Hat I can mount them on the host instead, but on RancherOS I can’t see how to do that, given that it needs the ceph utilities on the host.
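
To illustrate, the pattern that fails looks roughly like this (pool, image, and mount point names are just examples), run from inside a privileged container that has /dev bind-mounted:

rbd map rbd/data            # maps the image, e.g. to /dev/rbd0
mkdir -p /mnt/ceph
mount /dev/rbd0 /mnt/ceph   # works, but with docker 1.7's private mount
                            # namespaces the mount is only visible inside
                            # this container, not to the host or other containers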

There are a few open issues and pull requests to get this fixed, but I don’t know whether they will make Docker 1.8 or be added to Rancher’s system-docker.

–Bill
