Convoy-Gluster causing write.lock issues with Elasticsearch

I have been trying to start an Elasticsearch service whose data directory lives on a Convoy-Gluster volume. When the Elasticsearch instance attempts to write to the .kibana index, it fails with a write.lock error. I am only running a single Elasticsearch container.

[2016-05-13 14:19:47,595][INFO ][cluster.metadata ] [Captain Savage] [.kibana] creating index, cause [api], templates [], shards [1]/[1], mappings [config]
[2016-05-13 14:19:48,194][WARN ][index.engine ] [Captain Savage] [.kibana][0] failed engine [lucene commit failed]
org.apache.lucene.store.AlreadyClosedException: Underlying file changed by an external force at 2016-05-13T15:59:44.146661Z, (lock=NativeFSLock(path=/usr/share/elasticsearch/data/elasticsearch/nodes/0/indices/.kibana/0/index/write.lock,impl=sun.nio.ch.FileLockImpl[0:9223372036854775807 exclusive valid],ctime=2016-05-13T15:59:44.146661Z))
at org.apache.lucene.store.NativeFSLockFactory$NativeFSLock.ensureValid(NativeFSLockFactory.java:179)
at org.apache.lucene.store.LockValidatingDirectoryWrapper.createOutput(LockValidatingDirectoryWrapper.java:43)
at org.apache.lucene.index.SegmentInfos.write(SegmentInfos.java:516)
at org.apache.lucene.index.SegmentInfos.prepareCommit(SegmentInfos.java:809)
at org.apache.lucene.index.IndexWriter.startCommit(IndexWriter.java:4418)
at org.apache.lucene.index.IndexWriter.prepareCommitInternal(IndexWriter.java:2860)
at org.apache.lucene.index.IndexWriter.commitInternal(IndexWriter.java:2963)
at org.apache.lucene.index.IndexWriter.commit(IndexWriter.java:2930)
at org.elasticsearch.index.engine.InternalEngine.commitIndexWriter(InternalEngine.java:1260)
at org.elasticsearch.index.engine.InternalEngine.commitIndexWriter(InternalEngine.java:1268)
at org.elasticsearch.index.engine.InternalEngine.openTranslog(InternalEngine.java:217)
at org.elasticsearch.index.engine.InternalEngine.<init>(InternalEngine.java:151)
at org.elasticsearch.index.engine.InternalEngineFactory.newReadWriteEngine(InternalEngineFactory.java:25)
at org.elasticsearch.index.shard.IndexShard.newEngine(IndexShard.java:1515)
at org.elasticsearch.index.shard.IndexShard.createNewEngine(IndexShard.java:1499)
at org.elasticsearch.index.shard.IndexShard.internalPerformTranslogRecovery(IndexShard.java:972)
at org.elasticsearch.index.shard.IndexShard.performTranslogRecovery(IndexShard.java:944)
at org.elasticsearch.index.shard.StoreRecoveryService.recoverFromStore(StoreRecoveryService.java:241)
at org.elasticsearch.index.shard.StoreRecoveryService.access$100(StoreRecoveryService.java:56)
at org.elasticsearch.index.shard.StoreRecoveryService$1.run(StoreRecoveryService.java:129)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)

My Rancher version is v1.0.1 and my Docker version is v1.10.3.
Here is my docker-compose entry for Elasticsearch:

elasticsearch:
  ports:
  - 9200:9200/tcp
  - 9300:9300/tcp
  labels:
    io.rancher.container.pull_image: always
  image: elasticsearch
  volumes:
  - elasticsearchdata:/usr/share/elasticsearch/data
  stdin_open: true
  volume_driver: convoy-gluster

If there is a way around this, please let me know. Any alternative suggestions for storing the Elasticsearch data would also be appreciated.
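
For reference, a minimal sketch of one alternative I have been considering: dropping the volume_driver and bind-mounting a plain host directory instead (the /mnt/es-data path below is just a placeholder, not something I have set up yet). This would avoid the Gluster-backed storage entirely, though it pins the Elasticsearch container to whichever host holds the data:

elasticsearch:
  ports:
  - 9200:9200/tcp
  - 9300:9300/tcp
  labels:
    io.rancher.container.pull_image: always
  image: elasticsearch
  volumes:
  # bind-mount a local host directory instead of a Convoy-Gluster volume
  - /mnt/es-data:/usr/share/elasticsearch/data
  stdin_open: true

The obvious trade-off is losing the shared storage across hosts, which is what I was hoping to get from Convoy-Gluster in the first place, so I would still prefer a way to make the Gluster-backed volume work.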

Thanks