I am using this stack with my own ELK image:
version: '2'
services:
  LB-myApp-Elk:
    image: rancher/lb-service-haproxy:v0.7.9
    ports:
      - 9200:9200/tcp
      - 5601:5601/tcp
    labels:
      io.rancher.container.agent.role: environmentAdmin
      io.rancher.container.create_agent: 'true'
  myApp-ELK:
    image: registry.server.com/bizmate/elk_myApp
    stdin_open: true
    tty: true
    volumes:
      - myApp-elk-data:/var/lib/elasticsearch
      - myApp-log-lake:/tmp/logs
      - myApp-elk-output:/tmp/output
    ports:
      - 5601/tcp
      - 9200/tcp
      - 5044/tcp
    labels:
      io.rancher.container.pull_image: always
volumes:
  myApp-elk-data:
    external: true
    driver: rancher-nfs
  myApp-log-lake:
    external: true
    driver: rancher-nfs
  myApp-elk-output:
    external: true
    driver: rancher-nfs
Last time I had to destroy the stack and rm all the files in the elk-data volume, otherwise Elasticsearch would fail to write to the index and use it when the stack restarted.
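For reference, that cleanup was essentially the following, run against the NFS export backing the volume (the export path below is just an assumption, it depends on how the rancher-nfs driver is set up):
# wipe the Elasticsearch data so a fresh stack can recreate its indices;
# /exports/myApp-elk-data is a hypothetical path for the NFS export behind the myApp-elk-data volume
rm -rf /exports/myApp-elk-data/*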
Now I am also getting similar issues with the output data: the folder is empty, yet Logstash somehow cannot write to it.
==> /var/log/logstash/logstash-plain.log <==
[2017-12-01T00:53:59,134][INFO ][logstash.outputs.csv ] Opening file {:path=>"/tmp/output/webHookHelperBaseUrl.csv"}
[2017-12-01T00:53:59,153][FATAL][logstash.runner ] An unexpected error occurred! {:error=>#<Errno::EACCES: Permission denied - /tmp/output/webHookHelperBaseUrl.csv>, :backtrace=>["org/jruby/RubyFile.java:370:in `initialize'", "org/jruby/RubyIO.java:871:in `new'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-file-4.0.1/lib/logstash/outputs/file.rb:280:in `open'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-file-4.0.1/lib/logstash/outputs/file.rb:132:in `multi_receive_encoded'", "org/jruby/RubyHash.java:1342:in `each'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-file-4.0.1/lib/logstash/outputs/file.rb:131:in `multi_receive_encoded'", "org/jruby/ext/thread/Mutex.java:149:in `synchronize'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-file-4.0.1/lib/logstash/outputs/file.rb:130:in `multi_receive_encoded'", "/opt/logstash/logstash-core/lib/logstash/outputs/base.rb:90:in `multi_receive'", "/opt/logstash/logstash-core/lib/logstash/output_delegator_strategies/legacy.rb:22:in `multi_receive'", "/opt/logstash/logstash-core/lib/logstash/output_delegator.rb:47:in `multi_receive'", "/opt/logstash/logstash-core/lib/logstash/pipeline.rb:407:in `output_batch'", "org/jruby/RubyHash.java:1342:in `each'", "/opt/logstash/logstash-core/lib/logstash/pipeline.rb:406:in `output_batch'", "/opt/logstash/logstash-core/lib/logstash/pipeline.rb:352:in `worker_loop'", "/opt/logstash/logstash-core/lib/logstash/pipeline.rb:317:in `start_workers'"]}
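For what it's worth, this is the kind of check I would run inside the myApp-ELK container to compare the UID Logstash runs under with the ownership of the NFS-backed mounts (the container name and the logstash user name are assumptions, they depend on how Rancher names the container and how the image is built):
# show the UID/GID Logstash runs as and the numeric ownership/permissions of the mounted directories
docker exec -it myApp-ELK sh -c 'id logstash; ls -ldn /tmp/output /tmp/logs /var/lib/elasticsearch'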
I am very surprised by this behaviour because these containers are the only processes consuming the volumes, and they write with their own UIDs correctly when they can, so why do I get these permission errors? Any ideas? I don't get the same problem when I run this stack on my machine with plain docker-compose. I am unsure whether NFS and the Rancher driver make the whole thing more complicated.
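The only workaround I can think of is chowning the mount points to the service users when the container starts, roughly like the entrypoint fragment below (assuming the entrypoint still runs as root and that the image uses elasticsearch/logstash users with those names), but I would rather understand why the NFS-backed volumes behave differently in the first place.
# hypothetical entrypoint fragment: hand ownership of the NFS-backed mounts to the service users
chown -R elasticsearch:elasticsearch /var/lib/elasticsearch
chown -R logstash:logstash /tmp/output /tmp/logs
exec "$@"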