Cluster Logging to Elasticsearch target with systemd underlying OS

G’day. I’ve been unable to get Cluster Logging working using Elasticsearch as a target. The Rancher UI accepts the endpoint configuration, etc., but I’m not seeing any indices created in our Elasticsearch cluster. The Rancher cluster logging UI states:

'We will collect the standard output and standard error for each container, the log files which under path /var/log/containers/ on each host.'

… but /var/log/containers is empty on each of my k8s nodes. I’m using CentOS as the underlying OS, and I believe k8s uses journald as the logging driver by default on systemd-based OSes. Is this the reason I’m not seeing logging data in Elasticsearch? Has anyone here successfully set up cluster logging with CentOS as the underlying OS?

Cheers,
Dean

I’m also trying to figure out how to configure logging in Rancher using an external Elasticsearch.
The configuration seems fine, but where do we set the cluster name?
In any case, I can confirm that nothing gets sent to Elasticsearch.
Is there any way to debug this part?
Is there any more documentation on this topic?
I do have files in the stated folder, so …
Thanks
Luca

Hi gioppoluca, it sounds like we’re not experiencing the same issue. You can configure the Elasticsearch endpoint in the Rancher UI at Cluster > Tools > Logging.

Rancher 2.0 uses a fluentd daemonset for log aggregation under the hood. You can expose the config map, daemonset, etc. in the UI by moving the cattle-logging namespace into a project. You can use the following to view pertinent logs (replace 'fluentd-blah' with the name of the pods in your environment):
kubectl logs fluentd-blah fluentd -n cattle-logging
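
If you don’t know the pod names, you can list them first and then follow a specific pod’s fluentd container (the pod name below is a placeholder):

kubectl get pods -n cattle-logging
kubectl logs -f fluentd-blah -c fluentd -n cattle-logging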

I’m going to play around with the cluster-logging config map for fluentd to see if I can configure it to work with journald instead of tailing /var/log/containers/*.log.
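
If anyone else wants to poke at it too: the config map lives in the cattle-logging namespace and can be edited with kubectl as well. I’m not certain of its exact name, so list the config maps first (<configmap-name> below is a placeholder):

kubectl get configmaps -n cattle-logging
kubectl edit configmap <configmap-name> -n cattle-logging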

It would be great to get thoughts from others or the folks at Rancher :sunglasses:

I tried adding the following new source to cluster.conf for fluentd:

<source>
  @type systemd
  pos_file /fluentd/log/journal.pos
  path /run/log/journal
  tag journal
  read_from_head true
  filters [{ "_SYSTEMD_UNIT": "docker.service" }]
  <entry>
    field_map {"_HOSTNAME": "Node"}
    fields_strip_underscores true
  </entry>
</source>

which resulted in:

[error]: config error file="/fluentd/etc/fluent.conf" error_class=Fluent::ConfigError error="Unknown input plugin 'systemd'. Run 'gem search -rd fluent-plugin' to find plugins"

So it sounds like the rancher/fluentd:v0.1.9 image would need the systemd plugin for this to work.
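
To double-check, you could list the gems inside a running fluentd container, something along these lines (the pod name is a placeholder, and I’m assuming gem is available on the image’s PATH):

kubectl exec -n cattle-logging fluentd-blah -c fluentd -- gem list fluent-plugin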

I discovered that my nodes hadn’t been properly cleaned (I’m using 3 hosts as a testing environment and I destroy the Rancher environment a lot), and this was the reason things weren’t working.
After cleaning the hosts as per the documentation, I recreated the cluster, set the Elasticsearch target, and now things get pushed to the Elasticsearch server.
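
For anyone else hitting this, the cleanup I mean is the node cleanup described in the Rancher docs. Roughly, it boils down to removing leftover containers, volumes and the Kubernetes/Rancher state directories, something like the following (illustrative only; the directory list here is from memory, so check the official docs for the exact steps):

docker rm -f $(docker ps -qa)
docker volume rm $(docker volume ls -q)
sudo rm -rf /etc/kubernetes /etc/cni /opt/cni /opt/rke /var/lib/etcd /var/lib/cni /var/lib/kubelet /var/lib/rancher /var/log/containers /var/log/pods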
Thanks

P.S.
I’m on CentOS 7

Thanks for the reply. That’s interesting; I’m running CentOS 7 and my Docker logs are written to journald instead of log files.

Can you please let me know which versions of Rancher, Kubernetes, and Docker you’re using? I’m on the following:
Rancher: 2.0.5
Kubernetes: 1.10.1-rancher2.1
Docker: 1.13.1

Also, did you have to change any docker/kubernetes configuration to prevent logging to journald?

It would be helpful to see the output of the following command on your nodes:

docker info | grep Log

Here is mine:

Logging Driver: journald

I guess I can change the default logging driver to ‘json-file’.

Rancher 2.0.6

Docker version 18.03.1-ce, build 9ee9f40

My uname is
3.10.0-862.6.3.el7.x86_64

Logging Driver: json-file

No changes whatsoever, apart from setting the proxy information, since my machines are behind a company proxy.
Luca


It looks like we have our solution. I’m going to change the logging driver to json-file (and probably upgrade the docker engine) on my nodes.

I’ll post back here with results.

It works now, but I would like to know how to skip writing the JSON files.
Since Docker has a fluentd logging driver, it would be interesting to understand why we need the Docker daemon to write the files and then a fluentd process that reads the files and sends them to Elasticsearch.
Wouldn’t it be better to set the Docker daemon to use the fluentd logging driver and avoid all this writing to disk?
The problem is how to configure fluentd for that.
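
For reference, on the Docker side the fluentd logging driver would be configured in /etc/docker/daemon.json roughly like this (the address is just an example and would need to point at a fluentd forward input):

{
  "log-driver": "fluentd",
  "log-opts": {
    "fluentd-address": "localhost:24224"
  }
}

What I don’t know is how Rancher’s bundled fluentd would need to be reconfigured to accept a forward input instead of tailing files.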

Changing Docker’s log driver from journald to json-file resolved my issue.

Just a heads up for other folks in the same situation: the current docker package in CentOS’s extras repository configures the log driver in /etc/sysconfig/docker, so configuring it in /etc/docker/daemon.json won’t work. You’ll just need to remove --log-driver=journald from /etc/sysconfig/docker and restart docker.service.
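
The line to look for is the OPTIONS variable. On a stock install it looks something like this (your exact flags may differ):

OPTIONS='--selinux-enabled --log-driver=journald --signature-verification=false'

After dropping --log-driver=journald from that line, restart Docker:

systemctl restart docker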

Greetings Fessaries,

I’m having a similar issue to yours. On top of not being able to send logs to Elasticsearch, I realized I can’t send logs to my syslog server either. I’m hoping to try the fix you provided, but I’d like a bit more clarification about it. You removed --log-driver=journald and that was it? You didn’t replace it with json-file or anything to that effect? Thanks in advance.

g’day,

I’d suggest running:
docker info | grep 'Logging Driver'
and confirm whether your Docker daemon is using the journald logging driver. If it is, then you need to configure it to use the json-file logging driver. The way to do this depends on the distribution and how you installed the Docker engine on your nodes.

If you’re using Red Hat or CentOS and installed docker from standard repositories then yes, you just remove --log-driver=journald from /etc/sysconfig/docker and restart docker.service. This is because docker logs to json-file by default.

If you’re using another distribution or have installed docker-ce from docker’s repositories, then follow the instructions here - https://docs.docker.com/config/containers/logging/configure/
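
With docker-ce, that boils down to setting the driver in /etc/docker/daemon.json and restarting the daemon, e.g. (the rotation options are optional and just examples):

{
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "10m",
    "max-file": "3"
  }
}

systemctl restart docker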


Hello again Fessaries,

First of all, I wanted to thank you for the solution you provided. You helped me nail down my problem EXACTLY! Yes, I am running on RHEL with the stock Docker package and was also running the journald driver, so I removed it from /etc/sysconfig/docker on all 3 of my nodes (I’m running a 3-node k8s management cluster), restarting Docker on each node one at a time to avoid any major downtime. I pointed the logging at one of my test rsyslog servers and there came the flood of information! I then tested sending everything to Elasticsearch, and even though I think I had to create the index manually (no big deal), it did start feeding my index… FINALLY! Thank you so much. I’ve been banging my head on this for almost 2 weeks now.

No worries, mate. Glad you got it sorted. FWIW, I banged my head on it for a couple of weeks too.

Considering that Docker 1.13.1 is a Rancher 2.0 ‘supported’ Docker version, it would be nice to see better documentation around this problem.

Hello @Fessaries,

I have containerd as the runtime on my k8s cluster. I have enabled cluster and project logging, but I am not able to get logs into my Elasticsearch.

From the Rancher UI, it validates the settings, but when I look in Kibana no logs are there.
I see only the line below in the Kibana dashboard:

event: Rancher logging target setting validated sourcetype: rancher_id: MB2nxypEjeQ38z _type: container_log _index: m4_demo-2020-08-13 _score: 0

I would really appreciate it if you could share your input on this.