I’d like to keep all of RancherOS running in RAM and never let it touch my disk. Then, I’d like to mount /dev/sda so I can mount a volume from local disk into a Docker container. I’d like to boot RancherOS without persisting the state by setting:
rancher.state.dev=none
I don’t want to save RancherOS state with rancher.state.dev=LABEL=RANCHER_STATE because I never want to upgrade by running a command on a box. Instead, I just want to power cycle and PXE boot the latest and greatest RancherOS image.
I’d like to mount /dev/sda by placing this in cloud-config.yaml:
mounts:
  - ["/dev/sda", "/mnt/persist", "ext4", ""]
So, what would be a good strategy to use, to have my drive auto formatted so I can mount it? Basically, I don’t want to reformat it if it’s already been formatted for use in my cloud. However, if it’s brand new, or if I had Ubuntu or Windows installed previously, I just need it to wipe everything and format so it can be mounted.
A disk label would probably work well here. Say you want to use the label STATE, for example. If LABEL=STATE is detected, mount it to /mnt/persist. If not, format /dev/sda with that label (mkfs.ext4 -L STATE).
You could write a short script in runcmd to coordinate this process. I’d avoid using mounts for this and just call the mount command as part of the script.
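A minimal sketch of that label-based strategy might look like the function below. The device, label, and mountpoint names (/dev/sda, STATE, /mnt/persist) are examples from this thread, not fixed RancherOS values, and the helper name ensure_state_disk is made up for illustration.

```shell
#!/bin/bash
# If a filesystem with our label already exists, mount it; otherwise the disk
# is brand new (or left over from Ubuntu/Windows), so wipe it, label it, and
# then mount it.
ensure_state_disk() {
  local dev="$1" label="$2" mountpoint="$3"
  mkdir -p "${mountpoint}"
  if blkid -L "${label}" >/dev/null 2>&1; then
    # Already formatted for our cloud: mount by label.
    mount -t ext4 "LABEL=${label}" "${mountpoint}"
  else
    # Unlabeled or foreign disk: reformat, labeling it as ours.
    mkfs.ext4 -F -L "${label}" "${dev}"
    mount -t ext4 "${dev}" "${mountpoint}"
  fi
}
```

Called from runcmd (or rc.local) as, say, ensure_state_disk /dev/sda STATE /mnt/persist.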
If I boot RancherOS in memory and run these two commands:
sudo mkfs.ext4 -F -i 4096 -O ^64bit /dev/sda
sudo mount -t ext4 -o gid=1100,uid=1100 /dev/sda /mnt/persist
The mount command will return this error:
mount: mounting /dev/sda on /mnt/persist failed: Invalid argument
It works if I leave off "-o gid=1100,uid=1100". So, what’s the correct way to mount a device as the "rancher" user?
I read that, “there are no uid options for ext[234]. If you want to change the permissions of the files, you have to use chown/chmod.” So, would I just use chown right after calling mount?
You don’t mount a local disk “as” a user. You mount it in general, and the files inside the filesystem belong to whatever user writes them. Or you can use chown to change the owner of an existing file.
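So yes: since ext[234] has no uid=/gid= mount options, the usual pattern is to chown the mountpoint right after mounting. A sketch, assuming the uid/gid 1100 for the rancher user from the question (the device, mountpoint, and helper name are illustrative):

```shell
#!/bin/bash
# Mount the filesystem normally, then hand ownership of its root directory
# to uid/gid 1100 so that user can write to it.
mount_for_user() {
  local dev="$1" mountpoint="$2"
  mkdir -p "${mountpoint}"
  mount -t ext4 -o noatime "${dev}" "${mountpoint}"
  chown 1100:1100 "${mountpoint}"
}
```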
I got this working; I can deploy a Cassandra cluster that stores data in /mnt/nest/cassandra on each of the hosts. It turns out that it worked without needing to change the group or user. Does this look about right to you, or would you suggest changing anything? I included ros config set mounts even though I don’t think it was needed, just for good measure.
#cloud-config
write_files:
  - path: /etc/rc.local
    permissions: "0755"
    owner: root
    content: |
      #!/bin/bash
      mkdir -p /mnt/nest
      dev=$(blkid -L NEST)
      status=$?
      if [[ ${status} -eq 0 ]]; then
        ros config set mounts "[[\"${dev}\",\"/mnt/nest\",\"ext4\",\"noatime\"]]"
        mount -t ext4 -o noatime ${dev} /mnt/nest
      elif [[ ${status} -eq 2 ]]; then
        # No filesystem labeled NEST: format the first disk that exists.
        for dev in /dev/?da; do
          if [[ -e ${dev} ]]; then
            mkfs.ext4 -F -i 4096 -O ^64bit -L NEST ${dev}
            if [[ $? -eq 0 ]]; then
              ros config set mounts "[[\"${dev}\",\"/mnt/nest\",\"ext4\",\"noatime\"]]"
              mount -t ext4 -o noatime ${dev} /mnt/nest
              mkdir -p /mnt/nest/cassandra
            fi
            break
          fi
        done
      fi