Convoy-gluster on specific storage hosts

Hi seb2411,
What I always do is the following:

$ dd if=/dev/zero of=test.dd bs=1M count=1024
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB) copied, 1.4717 s, 730 MB/s

Sorry, it was a host on SSD, that's why it is so fast.
But in general this lets you check the throughput of your storage very quickly.
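
One caveat with that test: writing from /dev/zero can land mostly in the page cache, so the number can be optimistic. Assuming GNU coreutils dd, adding conv=fdatasync makes dd flush to disk before it reports the rate:

$ # flush the data to disk before dd reports the throughput
$ dd if=/dev/zero of=test.dd bs=1M count=1024 conv=fdatasync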

@ApolloDS ok, I will try this command.

@seb2411, just to be sure, don’t forget to delete test.dd after testing!
Otherwise you will have a big unused file lying around on your filesystem.

@samouds I was thinking about your MySQL problem. Did you try the MySQL 5.5 image instead of the latest? I remember having a similar problem on my PC with a Docker project. It worked perfectly with MySQL 5.5 but had similar troubles with 5.6.

@ApolloDS So after running my test :slightly_smiling:

On Vultr.com, with the container on a compute instance and the GlusterFS server on a local-storage instance.

dd if=/dev/zero of=test.dd bs=1M count=1024
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB) copied, 47.4925 s, 22.6 MB/s

@seb2411 it works now !

I did some small tests with WordPress/MySQL to measure speed, and this is what I get:

  • WordPress and MySQL with Convoy: 10 sec to load the default homepage.
  • WordPress with Convoy, MySQL without: 10 sec too.
  • WordPress without Convoy, MySQL with Convoy: 4 sec.
  • WordPress and MySQL without Convoy: 1 sec.

Why is Convoy so slow? As I understand it, it's not advised to use it in production for now, right?

This is what the benchmark said :

dd if=/dev/zero of=test.dd bs=1M count=1024
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB) copied, 155.418 s, 6.9 MB/s

It's too slow. How can we improve it?

It's definitely not usable like that. What hosting are you using? Mine seems a little bit better. I will investigate to see if we can find some way to improve the performance.

I was thinking the limitation is possibly the network. Most VPSes generally offer 100 Mbps or 200 Mbps.

If I'm not wrong:
200 Mbps = 25 MB/s
so to send 1 GB we need a minimum of about 40 s.

So we would need roughly a 4 Gbps network to get performance similar to an SSD.
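
As a quick sanity check of that arithmetic (divide the line rate in Mbps by 8 to get MB/s, then divide the size by that rate):

$ # rough transfer-time estimate for 1024 MB over a 200 Mbps link
$ echo 'scale=1; 1024 / (200 / 8)' | bc
40.9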

OK, so checking the Vultr.com docs, it seems the private network offers gigabit speed, which could improve performance a lot. I will try to set things up so the gigabit private network is used instead of the public one.
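
Before changing anything it's worth measuring the raw bandwidth between two hosts on each interface. A minimal sketch with iperf3, assuming it is installed on both hosts (10.99.0.2 is just a made-up private-network address):

# on the storage host
$ iperf3 -s

# on the compute host, against the storage host's private address
$ iperf3 -c 10.99.0.2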

I’m using dedicated servers from soyoustart (ovh).

One question: when you create a new file using Convoy/GlusterFS, does it write the file to the three servers simultaneously in one operation, or does it write it to the first one and then replicate it to the others?

OK, so testing again with the rancher-agent on the private network is not better:
root@722bdf64eceb:/testvolume# dd if=/dev/zero of=test.dd bs=1M count=1024
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB) copied, 145.482 s, 7.4 MB/s
root@722bdf64eceb:/testvolume# dd if=/dev/zero of=test.dd bs=1M count=1024
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB) copied, 114.637 s, 9.4 MB/s

root@722bdf64eceb:/testvolume# dd if=/dev/zero of=test2.dd bs=1M count=1024
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB) copied, 115.445 s, 9.3 MB/s

root@722bdf64eceb:/testvolume# dd if=/dev/zero of=test3.dd bs=1M count=1024
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB) copied, 95.5288 s, 11.2 MB/s

root@722bdf64eceb:/testvolume# dd if=/dev/zero of=test3.dd bs=1M count=1024
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB) copied, 115.8 s, 9.3 MB/s

I think it is still using the public network, so there is no improvement.
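
One rough way to check which network the Gluster traffic really uses is to look at the established TCP connections towards the Gluster ports from the host (or container) holding the mount; 24007 is the management port and bricks normally listen on 49152 and up, so the peer addresses tell you whether it is the public or the private interface. Assuming ss is available:

$ # show connections to the Gluster management and brick ports
$ ss -tn | grep -E ':(24007|4915[0-9])'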

Thanks :slight_smile: So it’s not ready for production.

Are there any better alternatives to Convoy/GlusterFS?

Reading a little about GlusterFS: you need at minimum a 1 Gbps network between your hosts, and 10 Gbps is better.

I'm still waiting for the feature that lets us specify which network interface a container should use, so we could, for example, tell all the convoy-gluster containers to communicate over the private network.

And possibly checking the different replication modes for Gluster. Currently replication is synchronous only, so a write has to be stored on every replica server before it is acknowledged.
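
If in doubt about how a volume is laid out, gluster volume info on any Gluster node shows the volume type, the replica count and the bricks; the volume name below is just a placeholder for whatever convoy-gluster created:

$ gluster volume info my_volume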

So currently the solution is not ready for production. In fact, I don't think it's realistic to put your database on a GlusterFS cluster. The better way is to keep the MySQL data on the host itself and make regular backups of the volume onto GlusterFS, so in case of a problem with your database you can restore it from GlusterFS.
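
For that kind of backup, a minimal sketch would be a nightly cron job along these lines (assuming the GlusterFS volume is mounted at /mnt/gluster-backups on the MySQL host and credentials come from /root/.my.cnf; both are placeholders):

$ # dump all databases, compressed, onto the Gluster-backed mount
$ mysqldump --all-databases | gzip > /mnt/gluster-backups/mysql-$(date +%F).sql.gz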

For assets you can use GlusterFS plus a caching layer, so you keep your assets on Gluster but don't lose performance.

And for the code there is no need for a shared volume at all.

But we need improvements in terms of performance first.


Hi all,
I want to add here that you can check your read performance with the previously created test.dd:

$ dd if=test.dd of=/dev/zero
2097152+0 records in
2097152+0 records out
1073741824 bytes (1.1 GB) copied, 2.56692 s, 418 MB/s

:slight_smile:
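
One caveat with that read test: re-reading a file you have just written mostly measures the page cache, which is why the number is so high. Assuming GNU dd, bypassing the cache (or dropping it first with echo 3 > /proc/sys/vm/drop_caches) gives a figure closer to what the storage and network can actually deliver, keeping in mind that O_DIRECT is not supported on every mount type:

$ # re-read the test file while bypassing the page cache
$ dd if=test.dd of=/dev/null bs=1M iflag=direct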

One interesting solution, but it currently works by invite only:


You can have a bare-metal server with a 2.5 Gbit/s internal network for €12/month.

Well, when we're talking about solutions, FreeNAS 10 will have an S3-compatible solution and it's free:

A number of new file sharing methods, complementing the traditional NFS /
SMB / iSCSI file sharing methods always offered by FreeNAS:

  • IPFS - The Inter-Planetary Filesystem (https://ipfs.io), offering a global namespace and torrent-style file distribution method for content you choose to share with others (or vice-versa).

  • Riak CS (http://docs.basho.com/riakcs/latest/) - a distributed (clustering) database offering an Amazon S3-compatible cloud storage API.

Swift and Gluster are NOT YET SUPPORTED in the ALPHA (but are coming)

I will definitely test this.

Maybe Rancher is interested to integrate something like this into Convoy?
https://hub.docker.com/r/hectcastro/riak/

Yes, but in the end I think one of the big limitations is still the bandwidth between the nodes in the cluster. You can have a high-performance solution, but if you only have a 200 Mbps interconnect it will be slow.

EDIT : thx for the links

So it looks like Convoy is slow, but does anybody have ideas about using Convoy with EBS? Same performance issues?

Gluster seems to be slow. NFS works better with Convoy, but it seems there are some performance problems with IPsec.