Unix sockets behaving weirdly when volume mounted from a Rancher container

I am trying to set up an rsyslog container that will aggregate logs from multiple containers.

I ran an rsyslog container, created a Unix socket for it to read logs from, and volume mounted that socket onto the host.
I then used netcat to post log messages to the Unix socket from the command line. Everything worked fine.
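What I did looked roughly like this (the socket path and image name are placeholders, not my exact setup):

```shell
# Sketch: run rsyslog with its Unix socket mounted onto the host,
# then post a syslog-formatted test message with netcat.
# "rsyslog-image" and /tmp/rsyslog/log.sock are hypothetical.
docker run -d --name rsyslog \
  -v /tmp/rsyslog/log.sock:/tmp/rsyslog/log.sock \
  rsyslog-image

# send a test message over the Unix socket from the host
echo '<14>test: hello from the host' | nc -U /tmp/rsyslog/log.sock
```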
I then ran the exact same container with the same parameters from Rancher, and my logs are not being read by the rsyslog container.
Is there any difference in how Rancher handles volumes compared to stock Docker?


We don’t handle mounts any differently from regular Docker, but Unix sockets do present some challenges with bind mounting. The short of it is that you must bind mount the directory containing the socket, not the socket itself: a bind mount of a single file pins that file’s inode, so if the process inside removes and recreates its socket, the mount still points at the stale one. I know this seems contradictory, because people bind mount /var/run/docker.sock, but that socket is quite unique.
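To see why mounting the socket file itself is fragile, here is a small sketch. It uses an ordinary file and a hard link, which, like a file bind mount, pins a specific inode; `/tmp/sockdemo` is just a scratch path for illustration:

```shell
# A file bind mount pins an inode, much like a hard link does.
rm -rf /tmp/sockdemo && mkdir -p /tmp/sockdemo
echo old > /tmp/sockdemo/app.sock                      # stand-in for the original socket
ln /tmp/sockdemo/app.sock /tmp/sockdemo/mounted.sock   # "bind mount" of the single file
rm /tmp/sockdemo/app.sock                              # the daemon removes its socket...
echo new > /tmp/sockdemo/app.sock                      # ...and recreates it: a new inode
cat /tmp/sockdemo/mounted.sock                         # prints "old" -- the mount is stale
```

Mounting the containing directory avoids this, because the directory entry is looked up fresh each time, so the consumer always sees the current socket.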

Are you bind mounting the directory or the socket?


I was mounting the socket directly. Mounting the directory works much better.
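For anyone landing here later, the working variant was along these lines (the paths and image name are illustrative, not my exact setup):

```shell
# Mount the directory that holds the socket, not the socket file itself.
# "rsyslog-image" is a placeholder configured to listen on /tmp/rsyslog/log.sock.
docker run -d --name rsyslog \
  -v /tmp/rsyslog:/tmp/rsyslog \
  rsyslog-image
```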



Would you please provide some more information on how to actually do this? I have confirmed that mounting the socket itself works in Docker but not in Rancher, so the problem appears to be specific to how Rancher handles the mount.

My use case here is running Docker commands within a Jenkins instance via the Docker socket on the host machine. The following seems to work just fine:

```shell
docker run -p 8080:8080 -v /var/run/docker.sock:/var/run/docker.sock --name jenkins-docker onesysadmin/jenkins-docker-executors
```

But duplicating this in Rancher via the GUI fails.