ELK Stack in catalog not working

So I set up the most recent catalog version of the ELK stack, and my logs are coming in tagged with _jsonparsefailure; the actual message isn’t being parsed.

ElasticSearch 2.x – 2.2.2-rancher1
LogStash 1.5.6-1-rancher1
LogSpout 0.2.0-1
Kibana 4.4.2-rancher1

I did this on a different cluster a while back and it worked fine. What has changed to put the log messages in this strange format? And, of course, how can I fix it?

Thanks

Any help would be greatly appreciated…

I’m getting a TON of 400 errors from the Logstash Indexer trying to send the logs to ElasticSearch.

e.g.:
failed action with response of 400, dropping action: ["index", {:_id=>nil, :_index=>"logstash-2016.06.10", :_type=>"logs", :_routing=>nil}, #<LogStash::Event:0x1a1bc35d @metadata_accessors=#<LogStash::Util::Accessors:0x4644f11 @store={"retry_count"=>0}, @lut={}>, @cancelled=false, @data={"message"=>"\tat org.elastic

There was a bug in rancher-compose (which is how catalog entries are launched) regarding log drivers that was fixed in v1.1.0-dev5.

Could you try again with v1.1.0-dev5?

I upgraded to Rancher v1.1.0-dev5 and I’m still getting the same issue with the latest (fresh-install) versions of each component in the Catalog.

LogStash Indexer:

6/22/2016 9:38:13 AM failed action with response of 400, dropping action: ["index", {:_id=>nil, :_index=>"logstash-2016.06.22", :_type=>"logs", :_routing=>nil}, #<LogStash::Event:0x19df778b @metadata_accessors=#<LogStash::Util::Accessors:0x227cf1da @store={"retry_count"=>0}, @lut={}>, @cancelled=false, @data={"message"=>"\tat org.elasticsearch.index.mapper.object.RootObjectMapper$TypeParser.parse(RootObjectMapper.java:139)", "docker.name"=>"/r-elasticsearch2_elasticsearch-masters_elasticsearch-base-master_1", "docker.id"=>"c80422cb77c3a10bf99fbdf8c9cfef7a0bcd1d08c4d3142774090969b204c91d", "docker.image"=>"elasticsearch:2.3.3", "docker.hostname"=>"elasticsearch2_elasticsearch-masters_1", "@version"=>"1", "@timestamp"=>"2016-06-22T13:36:41.393Z", "host"=>"10.42.125.127"}, @metadata={"retry_count"=>0}, @accessors=#<LogStash::Util::Accessors:0x9d2f2ef @store={"message"=>"\tat org.elasticsearch.index.mapper.object.RootObjectMapper$TypeParser.parse(RootObjectMapper.java:139)", "docker.name"=>"/r-elasticsearch2_elasticsearch-masters_elasticsearch-base-master_1", "docker.id"=>"c80422cb77c3a10bf99fbdf8c9cfef7a0bcd1d08c4d3142774090969b204c91d", "docker.image"=>"elasticsearch:2.3.3", "docker.hostname"=>"elasticsearch2_elasticsearch-masters_1", "@version"=>"1", "@timestamp"=>"2016-06-22T13:36:41.393Z", "host"=>"10.42.125.127"}, @lut={"type"=>[{"message"=>"\tat org.elasticsearch.index.mapper.object.RootObjectMapper$TypeParser.parse(RootObjectMapper.java:139)", "docker.name"=>"/r-elasticsearch2_elasticsearch-masters_elasticsearch-base-master_1", "docker.id"=>"c80422cb77c3a10bf99fbdf8c9cfef7a0bcd1d08c4d3142774090969b204c91d", "docker.image"=>"elasticsearch:2.3.3", "docker.hostname"=>"elasticsearch2_elasticsearch-masters_1", "@version"=>"1", "@timestamp"=>"2016-06-22T13:36:41.393Z", "host"=>"10.42.125.127"}, "type"]}>>] {:level=>:warn}

Thanks.

I can’t reproduce the issue with the build-master v0.20.0 image; the ELK stack seems to be working correctly and there are no errors produced in the LogStash Indexer. The next version will be 1.1.0, which should be out shortly.

Still getting these errors… Can we please find a solution?

@qrpike Are you still having this issue?

I had this issue and tracked it down to my logspout/logstash images not playing nicely with each other.

Here is what I did to fix it.

I replaced my logspout service with the docker-compose file below. (I just swapped the Rancher image for a community-supported logspout-logstash image. It doesn’t have as many modules installed as the Rancher one, but it pushes to Logstash over UDP, which is all I need at the moment.)

Try this first:

Delete your logspout stack > create a new one named logspout > paste in the docker-compose.yml and rancher-compose.yml below.

docker-compose.yml

logspout:
  restart: always
  environment:
    ROUTE_URIS: 'logstash://logstash:5000'
    LOGSPOUT: 'ignore'
  volumes:
  - '/var/run/docker.sock:/var/run/docker.sock'
  external_links:
  - logstash/logstash-collector:logstash
  labels:
    io.rancher.scheduler.global: 'true'
    io.rancher.container.hostname_override: container_name
  tty: true
  image: amouat/logspout-logstash:latest
  stdin_open: true

rancher-compose.yml

{}

Test your indexer. (I just restarted my Redis and indexer instances, because all the logs had backed up in Redis and kept blowing up in the indexer logs.)
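To sanity-check that events are reaching Elasticsearch again, something like this works. This is just a sketch: it assumes the Elasticsearch client port (9200) is reachable from wherever you run it, and ES_HOST is a placeholder you’ll need to point at your own setup.

```shell
#!/bin/sh
# ES_HOST is an assumption -- point it at wherever Elasticsearch's
# client port (9200) is reachable in your environment.
ES_HOST="${ES_HOST:-localhost:9200}"

# Logstash writes daily indices named logstash-YYYY.MM.dd (UTC dates).
TODAY=$(date -u +%Y.%m.%d)

echo "Checking index logstash-${TODAY} on ${ES_HOST}"
# A growing count here means events are flowing through the indexer.
curl -s "http://${ES_HOST}/logstash-${TODAY}/_count" || echo "Elasticsearch not reachable"
```

If the count keeps climbing while containers are logging, the pipeline is healthy again.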

If this solved your issue, great: stop here. If you see a new error you haven’t seen before, try the steps below.

Upgrade the Logstash instances to version 2.4.

Here are my docker-compose and rancher-compose files for the logstash upgrade.

Take note: the only real change to the files is the image for the collector and indexer, which becomes image: logstash:2.4.

I also had to change the elasticsearch output in the rancher-compose file to account for a breaking change introduced in Logstash 2.0 (just change host to hosts; I also deleted some extra fluff that is now the default in v2.0 and greater). You can read more about the breaking changes here: https://www.elastic.co/guide/en/logstash/2.4/breaking-changes.html
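For reference, the output change boils down to the snippet below. The 1.x options shown (protocol, port) are my guess at the "extra fluff" a pre-2.0 catalog config typically carried; your old file may differ, but hosts is the part that matters.

```
# Logstash 1.x elasticsearch output (old style):
elasticsearch {
  host => "elasticsearch"
  protocol => "http"
  port => "9200"
}

# Logstash 2.x (new style): host becomes hosts, and protocol is gone
# because the plugin is HTTP-only from 2.0 onward.
elasticsearch {
  hosts => "elasticsearch"
  index => "logstash-%{+YYYY.MM.dd}"
}
```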

docker-compose.yml

logstash-indexer-config:
  restart: always
  image: rancher/logstash-config:v0.2.0
  labels:
    io.rancher.container.hostname_override: container_name
redis:
  restart: always
  tty: true
  image: redis:3.0.3
  stdin_open: true
  labels:
    io.rancher.container.hostname_override: container_name
logstash-indexer:
  restart: always
  tty: true
  volumes_from:
  - logstash-indexer-config
  command:
  - logstash
  - -f
  - /etc/logstash
  image: logstash:2.4
  links:
  - redis:redis
  external_links:
  - es/elasticsearch-clients:elasticsearch
  stdin_open: true
  labels:
    io.rancher.sidekicks: logstash-indexer-config
    io.rancher.container.hostname_override: container_name
logstash-collector-config:
  restart: always
  image: rancher/logstash-config:v0.2.0
  labels:
    io.rancher.container.hostname_override: container_name
logstash-collector:
  restart: always
  tty: true
  links:
  - redis:redis
  ports:
  - "5000/udp"
  volumes_from:
  - logstash-collector-config
  command:
  - logstash
  - -f
  - /etc/logstash
  image: logstash:2.4
  stdin_open: true
  labels:
    io.rancher.sidekicks: logstash-collector-config
    io.rancher.container.hostname_override: container_name

rancher-compose.yml

logstash-indexer:
  metadata:
    logstash:
      inputs: |
        redis {
          host => "redis"
          port => "6379"
          data_type => "list"
          key => "logstash"
        }
      filters: |
        if [docker.name] == "/rancher-server" {
            json {
               source => "message"
            }

            kv {}

            if [@message] {
               mutate {
                 replace => { "message" => "%{@message}" }
               }
            }
        }
      outputs: |
        elasticsearch {
          hosts => "elasticsearch"
          index => "logstash-%{+YYYY.MM.dd}"
        }
logstash-collector:
  metadata:
    logstash:
      inputs: |
        udp {
          port => 5000
          codec => "json"
        }
      outputs: |
        redis {
          host => "redis"
          port => "6379"
          data_type => "list"
          key => "logstash"
        }

Let me know if this worked for you.