Just wondering if anyone has used Convoy for managing database volumes on Rancher. How did that work out? Which driver did you end up choosing?
I’m looking into various ways to do volume management on Cattle. So far the easiest approach for data has been to use bind-mounted volumes on a specific host and pin the service to that host with host labels in docker-compose.yml, though that’s without Convoy.
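For context, the label-pinning approach I mean looks roughly like this (the service name, label, and paths are just placeholders):

```yaml
version: '2'
services:
  db:
    image: postgres:9.5
    labels:
      # Cattle's scheduler keeps this service on hosts carrying this label
      io.rancher.scheduler.affinity:host_label: storage=db
    volumes:
      # plain host bind mount, so the data lives on that one host
      - /data/postgres:/var/lib/postgresql/data
```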
As for Convoy, I was wondering if my understanding of the various drivers below is correct?
DeviceMapper:
The DeviceMapper driver uses two block devices (one for data, one for metadata) on the Convoy host. Does it then back up and replicate the volumes to the various hosts managed by Rancher, or is there a separate DeviceMapper setup per host?
EBS
This requires that your instances be hosted on EC2, and the Convoy agent will attach EBS volumes to the instance and link them to the container dynamically. Though since an EBS volume can only be attached to one instance at a time, it seems like this could get slow if we need to maintain multiple EBS volumes on the same EC2 instance?
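If I’m reading the Convoy docs right, using the EBS driver would look something like this (untested on my end, names are just examples):

```bash
# start Convoy with the EBS backend on the EC2 host (needs an IAM role
# that is allowed to create/attach volumes)
sudo convoy daemon --drivers ebs

# use a Convoy-managed volume from Docker; Convoy attaches the backing
# EBS volume to this instance before the container starts
sudo docker run -d --volume-driver=convoy -v db_data:/var/lib/mysql mysql:5.7
```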
NFS/EFS
This looks like an extension on top of DeviceMapper, but it bypasses the block-device setup and just stores data on the NFS share? Volume creation looks the same as with DeviceMapper, except that the storage comes from an NFS server.
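From what I can tell, the NFS setup is basically Convoy’s VFS driver pointed at an NFS mount, something like this (paths and names made up):

```bash
# mount the NFS export on each host that should see the volumes
sudo mount -t nfs nfs-server:/export /nfs-mount

# point Convoy's VFS driver at the mount; volumes become directories there
sudo convoy daemon --drivers vfs --driver-opts vfs.path=/nfs-mount

# containers then get the NFS-backed directory mounted in
sudo docker run -d --volume-driver=convoy -v db_data:/var/lib/mysql mysql:5.7
```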
Questions
Can Convoy use both NFS and DeviceMapper, or is it tied to one driver at a time?
For the NFS and DeviceMapper Convoy drivers, do they clone the data volumes onto the local host and back them up, or is a persistent network drive being mounted into the Docker container?
If it is a persistent network drive, how does the latency affect data storage? Is it even worth it at that point, given that it adds another operational pitfall at the storage layer?
Yes, there is some latency to deal with, but for most of our instances convoy-nfs works just fine. You do need to make sure the NFS path is short and on a fast network, though.
In our case everything is networked since we’re running a SAN-type solution, so all persistent volumes (whether Docker or not) are either NFS or iSCSI. We’re starting to use the vendor-provided (Nutanix) iSCSI volume driver, but most are still NFS.
If you destroy your original EC2 instance (or autoscaling destroys it for you) and your EBS volume is marked as “available” in AWS, then attaching it to another EC2 instance with Docker and rescheduling the container is fairly easy. You can even do it with the docker command; you just have to know the volume ID, of course.
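Roughly something like this (volume ID, device, and paths made up):

```bash
# attach the leftover "available" EBS volume to the new instance
aws ec2 attach-volume --volume-id vol-0123456789abcdef0 \
  --instance-id i-0fedcba9876543210 --device /dev/xvdf

# mount it on the host, then start the container against the same data
sudo mkdir -p /data/db
sudo mount /dev/xvdf /data/db
docker run -d -v /data/db:/var/lib/postgresql/data postgres:9.5
```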
Ahh, it looks like you guys manage your own infrastructure in a large enterprise network? Neat.
So with the SAN solution, I guess the precedent for running databases on network drives works if you can guarantee QoS on the network. We’re a small business still on AWS for everything, so it seems like we won’t be hitting that use case anytime soon.