Trouble running logstash from catalog

After watching the demo on using ELK with logspout, I decided to try setting this up using the catalog feature. Elasticsearch started up fine, but logstash is stuck. The logstash_logstash-collector_1 container has been toggling between starting and activating for 20 minutes. This is the third time I’ve tried.

After reading the docs on how to use the logstash catalog (kidding)… really, I cloned the template repo, looked at the docker-compose.yml, and from that guessed that I needed to set the Elasticsearch stack/service field to “elasticsearch-clients”.

Are these catalogs supposed to work out of the box like I’m attempting, or are they only for reference?

And for the Logspout route for logs, I’m assuming that this will work:

  • logstash://ip_of_logstash_collector:5000

But again, I’m not certain.

Also, if I’m not sure which machine the logstash container will be running on, shouldn’t I just put a load balancer in front of it? Same for the Elasticsearch container I’m supposed to be connecting to. It just seems like these things could move around if they fail for some reason or their host dies.

Sorry for all of the questions, but I didn’t find any documentation on how the catalogs should be
used.

Yeah, the documentation is lacking. They are intended to work out of the box and run on Rancher.

The Elasticsearch stack/service question needs you to point to the Elasticsearch client service. The dropdown should show you the stack name, and elasticsearch-clients is one of its services. It’s the only one you can send data to.
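
Picking that service in the dropdown effectively wires an external link into the generated compose file. Roughly like this, as a sketch only: the stack and service names here are assumptions based on the default catalog names, and it’s assumed the indexer is the piece that ships data to Elasticsearch:

    # Sketch only: stack/service names are assumptions, not the template's
    # exact output.
    logstash-indexer:
      image: logstash
      external_links:
        # Rancher link format is <stack>/<service>:<alias>; the alias is the
        # hostname logstash's elasticsearch output would use.
        - elasticsearch/elasticsearch-clients:elasticsearch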

The logstash://logstash:5000 route in the logspout catalog entry works, as long as you link it to that endpoint.

Under the hood, Rancher is handling DNS through the links, so you do not need to know the IPs of anything running on Rancher (as long as they are linked).
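
So the logspout side ends up looking roughly like this. A sketch only, with assumed names: the image is a placeholder for whatever logspout build the catalog ships (the logstash adapter isn’t in stock gliderlabs/logspout), and the link alias “logstash” is what makes logstash://logstash:5000 resolve:

    # Sketch with assumed names, not the catalog's exact compose file.
    logspout:
      # placeholder image: any logspout build that includes the logstash adapter
      image: your-logspout-logstash-image
      volumes:
        # logspout reads container logs from the Docker socket
        - /var/run/docker.sock:/var/run/docker.sock
      external_links:
        # the alias "logstash" is the hostname used in the route below
        - logstash/logstash-collector:logstash
      # logspout takes its routes as command arguments
      command: logstash://logstash:5000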

The last step will be Kibana, which also needs to link to elasticsearch-clients.
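
Stripped down to the essential part, that looks something like the sketch below; the catalog’s actual Kibana entry also puts an nginx proxy in front, and the names here are assumptions:

    # Sketch only: the real catalog entry adds an nginx proxy in front.
    kibana:
      image: kibana:4.1
      environment:
        # Kibana reaches Elasticsearch through the link alias below
        ELASTICSEARCH_URL: http://elasticsearch:9200
      external_links:
        - elasticsearch/elasticsearch-clients:elasticsearch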

Thanks. I’m close now, it seems, but the nginx in front of Kibana is struggling, so I can’t get to Kibana. Any thoughts on what might be the problem?

I have es, logstash, and a logspout all running and apparently connected.

The logs:
    12/18/2015 9:39:18 AM {"name":"Kibana","hostname":"kibana_nginx-proxy_1","pid":1,"level":50,"err":{"message":"Request Timeout after 5000ms","name":"Error","stack":"Error: Request Timeout after 5000ms\n at null. (/opt/kibana/src/node_modules/elasticsearch/src/lib/transport.js:282:15)\n at Timer.listOnTimeout [as ontimeout] (timers.js:112:15)"},"msg":"","time":"2015-12-18T15:39:18.739Z","v":0}
    12/18/2015 9:39:18 AM {"name":"Kibana","hostname":"kibana_nginx-proxy_1","pid":1,"level":60,"err":{"message":"Request Timeout after 5000ms","name":"Error","stack":"Error: Request Timeout after 5000ms\n at null. (/opt/kibana/src/node_modules/elasticsearch/src/lib/transport.js:282:15)\n at Timer.listOnTimeout [as ontimeout] (timers.js:112:15)"},"msg":"","time":"2015-12-18T15:39:18.741Z","v":0}
    12/18/2015 9:39:54 AM {"name":"Kibana","hostname":"kibana_nginx-proxy_1","pid":1,"level":50,"err":{"message":"Request Timeout after 5000ms","name":"Error","stack":"Error: Request Timeout after 5000ms\n at null. (/opt/kibana/src/node_modules/elasticsearch/src/lib/transport.js:282:15)\n at Timer.listOnTimeout [as ontimeout] (timers.js:112:15)"},"msg":"","time":"2015-12-18T15:39:54.141Z","v":0}

Still can’t access Kibana…
Now I get a bunch of the following messages, then the kibana_nginx-proxy_kibana4_1 service restarts.

    1/11/2016 3:39:46 PM {"name":"Kibana","hostname":"kibana_nginx-proxy_1","pid":1,"level":50,"err":{"message":"Request Timeout after 5000ms","name":"Error","stack":"Error: Request Timeout after 5000ms\n at null. (/opt/kibana/src/node_modules/elasticsearch/src/lib/transport.js:282:15)\n at Timer.listOnTimeout [as ontimeout] (timers.js:112:15)"},"msg":"","time":"2016-01-11T21:39:46.302Z","v":0}
    1/11/2016 3:39:46 PM {"name":"Kibana","hostname":"kibana_nginx-proxy_1","pid":1,"level":60,"err":{"message":"Request Timeout after 5000ms","name":"Error","stack":"Error: Request Timeout after 5000ms\n at null. (/opt/kibana/src/node_modules/elasticsearch/src/lib/transport.js:282:15)\n at Timer.listOnTimeout [as ontimeout] (timers.js:112:15)"},"msg":"","time":"2016-01-11T21:39:46.304Z","v":0}
    1/11/2016 3:39:55 PM {"name":"Kibana","hostname":"kibana_nginx-proxy_1","pid":1,"level":50,"err":{"message":"Request Timeout after 5000ms","name":"Error","stack":"Error: Request Timeout after 5000ms\n at null. (/opt/kibana/src/node_modules/elasticsearch/src/lib/transport.js:282:15)\n at Timer.listOnTimeout [as ontimeout] (timers.js:112:15)"},"msg":"","time":"2016-01-11T21:39:55.267Z","v":0}
    1/11/2016 3:39:55 PM {"name":"Kibana","hostname":"kibana_nginx-proxy_1","pid":1,"level":60,"err":{"message":"Request Timeout after 5000ms","name":"Error","stack":"Error: Request Timeout after 5000ms\n at null. (/opt/kibana/src/node_modules/elasticsearch/src/lib/transport.js:282:15)\n at Timer.listOnTimeout [as ontimeout] (timers.js:112:15)"},"msg":"","time":"2016-01-11T21:39:55.271Z","v":0}
    1/11/2016 3:40:04 PM {"name":"Kibana","hostname":"kibana_nginx-proxy_1","pid":1,"level":50,"err":{"message":"Request Timeout after 5000ms","name":"Error","stack":"Error: Request Timeout after 5000ms\n at null. (/opt/kibana/src/node_modules/elasticsearch/src/lib/transport.js:282:15)\n at Timer.listOnTimeout [as ontimeout] (timers.js:112:15)"},"msg":"","time":"2016-01-11T21:40:04.232Z","v":0}
    1/11/2016 3:40:04 PM {"name":"Kibana","hostname":"kibana_nginx-proxy_1","pid":1,"level":60,"err":{"message":"Request Timeout after 5000ms","name":"Error","stack":"Error: Request Timeout after 5000ms\n at null. (/opt/kibana/src/node_modules/elasticsearch/src/lib/transport.js:282:15)\n at Timer.listOnTimeout [as ontimeout] (timers.js:112:15)"},"msg":"","time":"2016-01-11T21:40:04.237Z","v":0}

I’m also seeing timeouts from logstash trying to connect to Redis. Redis has no errors in its log, and here’s the access rule for port 6379 on all of the nodes: ALLOW IPv4 6379/tcp from 0.0.0.0/0
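
For reference, the broker in the default catalog template appears to be a Redis service inside the same logstash stack, so logstash would reach it through its link name rather than a host IP, and the 6379 rule on the hosts may not be the path in use. A rough sketch of that wiring, with assumed names:

    # Rough sketch with assumed names; not the template's exact compose file.
    redis:
      image: redis:3
    logstash-collector:
      image: logstash
      links:
        # a plain in-stack link gives the collector DNS for the name "redis",
        # which is the host its redis output would point at
        - redis:redis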