About 6 months ago, maybe 7, I posted a question about mounting a disk volume for use with containers on RancherOS. The only working answer was to create a startup.sh file and have that manually mount any other disk volumes that happen to be on the box.
I have to say that is not a satisfying response. Operating systems have been able to mount multiple disks without custom startup scripts since the 1980s. Even DOS could do this.
What am I missing? The documentation hints that a cloud-config.yml operator "mounts" exists. I've tried several incantations without success. Here's what I want to do: I want to mount /dev/vbd (that's right, there is no partition) onto /mnt/s at boot. According to the docs, adding mounts: [["vdb","/mnt/s","ext4","defaults"]] to cloud-config.yml via ros config set is the way to do this. Even with RancherOS 0.5, the device does not get mounted. So I do this:
ros config set mounts '[["/dev/vbd","/mnt/s","ext4","defaults"]]'
or this:
ros config set mounts '[["vbd","/mnt/s","ext4","defaults"]]'
and reboot. /mnt/s does not have the device mounted (/mnt/s exists, but it's empty). The device is not mounted anywhere at all. Mounting it by hand works fine.
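For the record, the by-hand mount that works is just the obvious command (assuming the device really is /dev/vbd as written above):

sudo mount -t ext4 /dev/vbd /mnt/s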
I've tried editing /etc/fstab, but of course that doesn't work because it just gets replaced on startup. Not sure where else to turn, short of writing a script to mount the device.
I feel like I'm really missing something fundamental here. Surely other people would like to mount more than the root volume onto a RancherOS-based host for use by containers. How are they accomplishing this?
Thanks for your time, desperately trying to adopt RancherOS…
Hi Denise. Thanks for the response. Unfortunately, it does not help. As I said in my note, I’ve tried the mounts directive with no success. Perhaps you can give it another look and help me see where I’m making the mistake?
All I’m trying to do is mount a second disk so that multiple containers can use the second disk without having to write a custom script to run a mount command on bootup.
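Once the disk is mounted on the host, the plan is just to bind-mount the path into the containers; the image name and paths here are only placeholders:

docker run -d --name myapp -v /mnt/s/appdata:/data some-image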
Are there any issues if you try performing the mount manually in the console container?
Would you mind trying to use an empty string rather than defaults as the last parameter for the cloud-config mount? That might be a concept specific to the mount command and not the syscall (which RancherOS uses internally).
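In other words, something along these lines (assuming your device shows up as /dev/vdb; adjust the device and mount point as needed):

ros config set mounts '[["/dev/vdb","/mnt/s","ext4",""]]'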
Probably important to note: this does not seem to work on 0.4.5:
[root@rancher conf]# ros os version
v0.4.5
[root@rancher conf]# ros config set mounts '[["/dev/vdb","/data2","ext4",""]]'
[root@rancher conf]# ros config get mounts
So it seems the value cannot even be set on that version, while under 0.5.0:
[root@rancher rancher]# ros os version
v0.5.0
[root@rancher rancher]# ros config set mounts '[["/dev/vdb","/data2","ext4",""]]'
[root@rancher rancher]# ros config get mounts
- - /dev/vdb
  - /data2
  - ext4
  - ""
@denise this is probably something for the docs, so people know when the feature has been introduced.
@EugenMayer The docs are currently written assuming that you are running v0.5.0, as we introduced a lot of new features that don't exist in previous releases. We hope our users will adopt the latest versions of RancherOS, as we are constantly improving and stabilizing it.
We also don't have the bandwidth to version the docs for RancherOS at this time. There are no plans to version the RancherOS docs until GA.
@denise I understand that versioning docs is not a small job. Maybe just a notice, then. I tried to get this feature working for over an hour. The issue is that there is no error message when setting mounts on 0.4.5, so it could make sense to warn your user base.
Also, this is not mentioned in the changelogs; only shared mounts (and resizefs) are mentioned there.
A lot of other newly introduced features at least have an error or a "missing" button, making them easy to distinguish.
Nevertheless, it's not too critical either; maybe Google will help one person or the other.
ros config set mounts '[["192.168.0.2:/mnt/volume01/data", "/mnt/data1", "nfs4", ""]]'
It just doesn't work. What am I missing? By now I've crawled through the documentation that is available (or cached 404 pages), along with closed GitHub issues. Does anyone have a pointer here?
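For anyone debugging the same thing: one obvious first step, echoing the earlier suggestion in this thread, is to try the mount manually from the console to separate NFS-client problems from cloud-config problems, e.g.:

sudo mount -t nfs4 192.168.0.2:/mnt/volume01/data /mnt/data1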