How to import data into a gluster-convoy volume for the first time

Hi!

Lately I’ve been trying to set up Rancher with GlusterFS and gluster-convoy. I’ve done some research in the forums, but I can’t find anything relevant to this issue.

So, I’m having trouble persisting the data of a few sites. I have created three volumes: one for the DB, one for the Solr indexes and another for the private website files. If I start the containers without a gluster-convoy volume hooked in, everything works fine.

However, if I attach a new shared data volume from gluster-convoy over the container’s already existing volume, the site has tons of problems: the existing private files disappear, and the same happens with the database volume.

It seems that mounting a convoy folder simply wipes out all the data in those directories and replaces it with the contents of the convoy volume. Is there any way to make the convoy volume, if it is empty at start, copy the data from the already populated volume in the container?

The only idea I have is to change the image so the mounted path is empty and then load the data at start, but that makes everything much harder and I would have to refactor many images. Also, if I wanted to attach a volume to an existing running container, I wouldn’t be able to at all!
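For what it’s worth, that “load the data at start” idea is usually done with a small entrypoint script that seeds the volume only when it is empty, so real data is never overwritten on later starts. A rough sketch (all paths and names are hypothetical; temp directories stand in for the image’s seed directory and the mounted volume so it can run anywhere):

```shell
# Sketch of a "seed the volume on first start" entrypoint. In a real image,
# SEED_DIR would be a directory baked into the image and DATA_DIR the mounted
# (possibly empty) convoy volume; temp dirs stand in for them here.
SEED_DIR=$(mktemp -d)
DATA_DIR=$(mktemp -d)
echo "default config" > "$SEED_DIR/settings.conf"

# Only copy the bundled defaults if the volume is empty, so an already
# populated volume is left untouched.
if [ -z "$(ls -A "$DATA_DIR")" ]; then
  cp -a "$SEED_DIR/." "$DATA_DIR/"
fi

ls "$DATA_DIR"
```

In a real image the script would end with `exec "$@"` so the container’s normal command still runs, but as you say, this requires changing every image.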

If I fixed this I’d have everything working here :slight_smile:

Thanks!

@sturgelose how are you attaching a new shared datavolume from convoy-gluster to the container’s volume? Are you starting a new container?

Unfortunately, if your data is in an existing volume and you want to migrate it to a convoy-gluster volume, I think you’re going to have to write a migration script/container. The idea would be to create a container that mounts both the normal volume and a gluster volume, and then, inside the container, cp or rsync from the directory where the old volume is mounted to the directory where the convoy-gluster volume is mounted. Once that is done, you can mount the convoy-gluster volume into a new instance of the website container(s).
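The copy step of such a migration container could look roughly like this (a sketch; the `/old-data` and `/new-data` mount points are hypothetical, and temp directories stand in for the two mounted volumes so the snippet runs anywhere):

```shell
# Simulate the migration container's two mount points locally; in the real
# container these would be the old volume mounted at /old-data (read-only)
# and the convoy-gluster volume at /new-data.
OLD_DATA=$(mktemp -d)   # stands in for the existing, populated volume
NEW_DATA=$(mktemp -d)   # stands in for the empty convoy-gluster volume
echo "site content" > "$OLD_DATA/index.html"

# -a preserves permissions, ownership and timestamps; the trailing "/."
# copies the directory's contents (including hidden files), not the
# directory itself.
cp -a "$OLD_DATA/." "$NEW_DATA/"

ls "$NEW_DATA"
```

In practice you would run this inside a throwaway container along the lines of `docker run --rm -v oldvol:/old-data:ro -v glustervol:/new-data alpine cp -a /old-data/. /new-data/` (volume names hypothetical; the exact flags depend on your Docker and Convoy versions).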

Also, FWIW, GlusterFS is not really the most appropriate storage technology for a database to run on top of. You’d be better off using local disk and then using a known and proven MySQL clustering or replication technology.

Ok, thanks!

That was mostly the issue. I also found the explanation of how mounts work in the Docker documentation. I was trying to figure out how to scale across nodes: the volumes need to be shared, or the two site instances will end up with different content. So I needed either GlusterFS or NFS to fix the issue.

And, yes, I know this isn’t suitable for databases. That was just a plan to set up backups of volumes. :wink: