Ship Kubernetes logs to Logstash

Hello,

I'd like to know how others have solved this. I have a working solution, but I'm generally interested in how you did it.

Here is the result of a day spent reading:

  • The Kubernetes ecosystem mostly works with Fluentd (so did OpenShift in my previous project).
  • Fluentd adds nice container metadata: https://github.com/fluent/fluentd-kubernetes-daemonset
  • Fluentd can't talk to Logstash. BUMMER.
  • Filebeat talks to Logstash, but Kubernetes is not its main use case, so it doesn't ship nice defaults like the Fluentd daemonset linked above.
  • But the only metadata the Fluentd daemonset adds comes from a plugin that is essentially slicing up log file names (see the sketch after this list).
  • Luckily, someone already did the same for Filebeat: https://github.com/ApsOps/filebeat-kubernetes
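
For the curious, here is roughly what that file-name slicing has to work with. The kubelet symlinks every container's log into /var/log/containers/ with the pod, namespace, and container name baked into the file name, so a shipper only needs to tail one directory and parse the paths. A minimal sketch of the Filebeat input side (option names are from the Filebeat docs; everything else is illustrative):

```yaml
# The kubelet names the symlinks like:
#   /var/log/containers/<pod-name>_<namespace>_<container-name>-<container-id>.log
# e.g. nginx-2371676037-1fs4d_default_nginx-<id>.log
#
# (filebeat.inputs on recent Filebeat versions; older releases call this
# filebeat.prospectors.)
filebeat.inputs:
  - type: log
    paths:
      - /var/log/containers/*.log
    # Docker's json-file driver wraps every line in JSON; unwrap it and use
    # the "log" key as the message.
    json.keys_under_root: true
    json.message_key: log
```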

For now, I'm using https://github.com/ApsOps/filebeat-kubernetes on my testing cluster to ship logs to Logstash.
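
The relevant tweak in a setup like that is just pointing Filebeat's output at Logstash instead of Elasticsearch. A minimal sketch of the filebeat.yml side (the host below is an assumed in-cluster Service name, not something from the repo):

```yaml
# Ship to Logstash over the Beats protocol instead of writing to
# Elasticsearch directly. The host is a placeholder for wherever your
# Logstash beats input listens.
output.logstash:
  hosts: ["logstash.logging.svc.cluster.local:5044"]
```

This assumes a beats input listening on the Logstash side; 5044 is the conventional default port.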

Looking forward to hearing other takes.

Take care.
Laszlo

We are using the official Filebeat and Metricbeat deployment/daemonset configs with some tweaks.

We're also using Filebeat's hints-based autodiscover to deal properly with multiline logs (sketch below).
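
Concretely, that looks something like the following. The autodiscover options and the co.elastic.logs/* annotation names are from the Filebeat docs; the multiline pattern is just an example for joining indented stack-trace lines to the line that started them.

```yaml
# filebeat.yml: let Filebeat discover pods and read per-pod hints.
filebeat.autodiscover:
  providers:
    - type: kubernetes
      hints.enabled: true
```

```yaml
# Pod spec: annotations the hints provider turns into multiline settings,
# so e.g. Java stack traces stay attached to their first line.
metadata:
  annotations:
    co.elastic.logs/multiline.pattern: '^\s'
    co.elastic.logs/multiline.negate: "false"
    co.elastic.logs/multiline.match: after
```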

The Filebeat daemonset pods send logs to Logstash; the Metricbeat pods send data directly to Elasticsearch/Kibana.
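
The split between the two outputs is just per-Beat config. A sketch of the Metricbeat side (hosts are placeholders):

```yaml
# metricbeat.yml: no Logstash hop, straight to Elasticsearch. setup.kibana
# is only needed so `metricbeat setup` can load its dashboards.
output.elasticsearch:
  hosts: ["elasticsearch.logging.svc.cluster.local:9200"]
setup.kibana:
  host: "kibana.logging.svc.cluster.local:5601"
```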

We're using ElastAlert for alerting, currently just to Slack.
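
For anyone who hasn't used it, an ElastAlert rule is a small YAML file. A hedged sketch of a Slack rule (index, field name, thresholds, and webhook URL are all placeholders):

```yaml
# Fire when an index sees too many error-level events in a short window.
name: error-spike-to-slack
type: frequency
index: filebeat-*
num_events: 50
timeframe:
  minutes: 5
filter:
  - query:
      query_string:
        query: "log.level: error"   # adjust to your mapping
alert:
  - slack
slack_webhook_url: "https://hooks.slack.com/services/XXX/YYY/ZZZ"
```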

So far it's working very well, though only recently: Metricbeat had issues with changes made to kube-state-metrics that weren't fixed until Metricbeat 6.4.0. Now that those are resolved, it's pretty golden.

–Alex