Convoy for Managing Database Data Volumes

Just wondering if anyone has used Convoy for managing database volumes on Rancher. How did it work out? Which driver did you end up choosing?

I’m looking into various ways to do volume management on Cattle. So far the easiest way to handle data is to use volume mounts on a specific host and pin the service to that host with host labels in the docker-compose.yml, though this is without Convoy.
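For context, the host-label approach I mean looks roughly like this (a sketch; the label `storage=db`, the image, and the host path are my own placeholders, and I’m assuming Rancher’s Cattle scheduler affinity label):

```yaml
# docker-compose.yml sketch: pin the service to a host labeled storage=db,
# then bind-mount a host directory for the data (no Convoy involved)
db:
  image: postgres:9.5
  labels:
    io.rancher.scheduler.affinity:host_label: storage=db
  volumes:
    - /data/pgdata:/var/lib/postgresql/data
```

This works, but the data is stuck to that one host, which is what got me looking at Convoy in the first place.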

As for Convoy, I was wondering if my understanding of the various drivers is correct?

The Device Mapper driver uses two block devices mounted on the Convoy host. Does it then back up and replicate the volumes to the various hosts managed by Rancher? Or is there a Device Mapper setup per host?

The EBS driver requires that your instances be hosted on EC2, and the Convoy agent will attach EBS volumes (or volumes restored from snapshots) to the instance and link them to the container dynamically. Since an EBS volume can only be attached to one instance at a time, it seems like this would be slow if we need to maintain multiple EBS volumes on the same EC2 instance?

The NFS driver looks like an extension on top of Device Mapper, but it bypasses the block-store setup and just stores the data on NFS? Volume creation is the same as with Device Mapper, except that the volumes come from an NFS server.
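From my reading of the Convoy docs, the “NFS” setup is really the VFS driver pointed at an NFS export that you mount on each host yourself. A sketch of what I think the setup looks like (server name and paths are assumptions):

```shell
# mount the NFS export on the host, then hand that path to Convoy's VFS driver
sudo mkdir -p /mnt/nfs-volumes
sudo mount -t nfs nfs-server:/exports/convoy /mnt/nfs-volumes
sudo convoy daemon --drivers vfs --driver-opts vfs.path=/mnt/nfs-volumes
```

If that’s right, each host mounts the same export, so any host can reach any volume. Happy to be corrected if the setup is different.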


  • Can Convoy use both NFS and Device Mapper? Or is it tied to one driver at a time?
  • For the NFS and Device Mapper data drivers, does Convoy clone the data volumes on the local host and back them up? Or is there a persistent network drive being mounted into the Docker container?
  • If it is a persistent network drive, how does the latency affect data storage? Is it even worth it at that point, given the extra operational pitfall on the data storage layer?

  • Convoy can use more than one driver at a time. I am using NFS and EBS.
  • It’s a persistent NFS drive mounted into the container.
  • I haven’t seen problems with latency so far. On the other hand, I am storing files, not a database.

Yes, there is some latency to deal with, but for most of our instances it’s just fine using Convoy-NFS. You do need to make sure the NFS path is short and on a fast network, though.
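If latency is a concern, the usual first step is tuning the NFS client mount options rather than anything Convoy-specific. A sketch (these are generic NFS tuning values, not recommendations from the Convoy docs; server name and paths are placeholders):

```shell
# larger read/write sizes and hard mounts are common starting points for
# NFS-backed data; benchmark before settling on values
sudo mount -t nfs -o vers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600 \
  nfs-server:/exports/convoy /mnt/nfs-volumes
```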

In our case everything is networked, as we’re running a SAN-type solution, so all persistent volumes (whether Docker or not) are either NFS or iSCSI. We’re starting to use the vendor-provided (Nutanix) iSCSI volume driver, but most are NFS.

Thanks for sharing!

Just wondering if you had seen this scenario.

For EBS, does Convoy handle a container migrating to a different host relatively well?
Say for the following service backed by a high-IOPS EBS volume:

  db:
    build: ./docks/psql
    ports:
      - "5432"
    volumes:
      - db_vol:/var/lib/postgresql/data
    volume_driver: convoy

If the host that the container is on is killed, does Convoy just move the EBS volume to the next host the container is scheduled on?

And if we have 2 containers of postgres, would that end up conflicting?

If you destroy your original EC2 instance (or autoscaling destroys it for you) and your EBS volume is marked as “available” in AWS, then mounting it to another EC2 instance with Docker and rescheduling the container is fairly easy. You can even do it with a docker command; you have to know the volume ID, of course.

docker volume create --driver convoy --opt driver=ebs --opt id=vol-12345 --name myvolume

The new EC2 instance has to be in the same availability zone, though.

Sorry, I have no comments regarding 2 containers of Postgres; haven’t tried it.

Ahh, it looks like you guys manage your own infrastructure in a large enterprise network? Neat.

So with the SAN solution, I guess the precedent for databases on network drives works if you can guarantee QoS on the network. We’re a small business still on AWS for everything, so it seems like we won’t be hitting that use case anytime soon.

EBS makes more sense for us, then.

Oh okay, I’ll test it out and report back.

Convoy doesn’t do magic block-store migrations, it seems, but it’s still a lot better than plain host mounts!

Looks like with the name db_vol, Convoy just creates two volumes tagged with that name.

It looks like EBS volume names are not unique, so we will have to explicitly specify a volume ID.
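Building on the earlier `docker volume create` command, the workaround seems to be pinning each container to its own EBS volume ID so the names can’t collide (the volume IDs and names below are hypothetical):

```shell
# one Convoy volume per Postgres container, each bound to a distinct EBS volume
docker volume create --driver convoy --opt driver=ebs --opt id=vol-aaaa1111 --name db_vol_a
docker volume create --driver convoy --opt driver=ebs --opt id=vol-bbbb2222 --name db_vol_b
```

Then each compose service would reference its own volume name instead of a shared db_vol.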