G’day. I’ve been unable to get Cluster Logging working with Elasticsearch as a target. The Rancher UI accepts the endpoint configuration, etc., but I’m not seeing any indices created in our Elasticsearch cluster. The Rancher cluster logging UI states:
'We will collect the standard output and standard error for each container, the log files which under path /var/log/containers/ on each host. ’
… but /var/log/containers is empty on each of my k8s nodes. I’m using CentOS as the underlying OS, and I believe k8s uses journald as the logging driver by default on systemd-based OSes. Is this the reason I’m not seeing logging data in Elasticsearch? Has anyone here successfully set up cluster logging with CentOS as the underlying OS?
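For reference, this is roughly how I’m checking things on each node (just the commands I’m running; output will obviously differ on your setup):

# which logging driver is the Docker daemon using?
docker info --format '{{.LoggingDriver}}'

# are there any container log files/symlinks for the log collector to tail?
ls -l /var/log/containers/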
I’m also trying to figure out how to configure logging in Rancher using an external Elasticsearch.
Configuration seems good, but where do we set the cluster name?
In any case, I can confirm that nothing gets sent to Elasticsearch.
Is there any way to debug this part?
Any more documentation on this topic?
I have files in the stated folder so …
Thanks
Luca
Hi gioppoluca, it sounds like we’re not experiencing the same issue. You can configure the Elasticsearch endpoint in the Rancher UI at Cluster > Tools > Logging.
Rancher 2.0 uses a fluentd daemonset for log aggregation under the hood. You can expose the config map, deployment, etc. in the UI by moving the cattle-logging namespace into a project. You can use the following to view pertinent logs (replace ‘fluentd-blah’ with the name of the pods in your environment): kubectl logs fluentd-blah fluentd -n cattle-logging
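For example (the pod name below is a placeholder; use whatever kubectl reports in your cluster):

# list the fluentd pods the logging stack created
kubectl get pods -n cattle-logging

# follow the logs of one of them (the container is named fluentd)
kubectl logs -n cattle-logging fluentd-xxxxx -c fluentd -f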
I’m going to play around with the cluster-logging config map for fluentd to see if I can configure it to work with journald instead of tailing /var/log/containers/*.log.
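Something along these lines in the fluentd config is what I have in mind (a rough sketch only, assuming the fluent-plugin-systemd plugin is available in the fluentd image Rancher ships; parameter names differ between plugin versions, so don’t take it verbatim):

<source>
  # read Docker's logs from the systemd journal instead of tailing files
  @type systemd
  tag docker.journal
  path /var/log/journal
  # newer plugin versions use "matches"; older ones use "filters"
  matches [{ "_SYSTEMD_UNIT": "docker.service" }]
  read_from_head true
  <storage>
    # remember the journal cursor across fluentd restarts
    @type local
    persistent true
    path /var/log/fluentd-journald-docker-cursor.json
  </storage>
</source>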
It would be great to get thoughts from others or from the folks at Rancher.
I discovered that my nodes had not been properly cleaned (I’m using 3 hosts as a testing environment and I destroy the Rancher environment a lot), and this was the reason things were not working.
After cleaning the hosts as per the documentation, I recreated the cluster, set up the Elasticsearch resource, and now things get pushed to the Elasticsearch server.
Thanks
Thanks for the reply. That’s interesting; I’m running CentOS 7 and my Docker logs are written to journald instead of log files.
Can you please let me know which versions of Rancher, Kubernetes and Docker you’re using? I’m on the following:
Rancher: 2.0.5
Kubernetes: 1.10.1-rancher2.1
Docker: 1.13.1
Also, did you have to change any docker/kubernetes configuration to prevent logging to journald?
It would be helpful to see the output of the following command on your nodes:
docker info | grep Log
Here is mine:
Logging Driver: journald
I guess I can change the default logging driver to ‘json-file’.
It works, but I would like to know how to skip writing to the JSON files.
Since Docker has a fluentd logging driver, it would be interesting to understand why we need to set the Docker daemon to write the files and then have a fluentd process that reads the files and sends them to Elasticsearch.
Wouldn’t it be better to set the Docker daemon to use the fluentd driver and avoid all this writing to disk?
The problem is how to configure fluentd for that.
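To be concrete, what I mean is something like this in /etc/docker/daemon.json (just a sketch; the fluentd-address is a made-up example, and as far as I know this is not how Rancher’s own logging stack works):

{
  "log-driver": "fluentd",
  "log-opts": {
    "fluentd-address": "localhost:24224",
    "tag": "docker.{{.Name}}"
  }
}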
Changing docker’s log driver from journald to json-file resolved my issue.
Just a heads up for other folks in the same situation: the current docker package in CentOS’s extras repository configures the log driver in /etc/sysconfig/docker, so configuring it in /etc/docker/daemon.json won’t work. You’ll just need to remove ‘--log-driver=journald’ from /etc/sysconfig/docker and restart docker.service.
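In other words, on each node (back up the file first; you can also just edit the OPTIONS line by hand):

# strip the journald log driver option from the Docker sysconfig
sudo sed -i 's/--log-driver=journald//' /etc/sysconfig/docker
sudo systemctl restart docker.service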
I’m also having a similar issue to yours. Besides not being able to send logs to Elasticsearch, I realized that I can’t send log files to my syslog server either. I’m hoping to try the fix you provided, but I’d like to get some more clarification about it. So you removed --log-driver=journald and that was it? You didn’t replace it with json-file or anything to that effect? Thanks in advance.
I’d suggest running: docker info | grep 'Logging Driver'
and confirming that your Docker daemon is using the journald logging driver. If it is, then you need to configure it to use the json-file logging driver instead. How to do this depends on the distribution and on how you installed Docker Engine on your nodes.
If you’re using Red Hat or CentOS and installed Docker from the standard repositories, then yes, you just remove --log-driver=journald from /etc/sysconfig/docker and restart docker.service. Without that option, Docker logs to json-file by default.
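If you installed docker-ce from Docker’s own repositories instead, setting it in /etc/docker/daemon.json should work, something along these lines (the log rotation options are just an example, not required), followed by a Docker restart:

{
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "10m",
    "max-file": "3"
  }
}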
First of all, I wanted to thank you for the solution you provided. You helped me nail down my problem EXACTLY! Yes, I am running on RHEL using stock Docker and was also running the journald driver. So I removed it from /etc/sysconfig/docker on all 3 of my nodes (I am running a 3 node k8s management cluster) and restarted Docker on each node one at a time to ensure no major downtime. I pointed the logging at one of my test rsyslog servers and there came the flood of information! Then I tested sending my stuff to Elasticsearch and, even though I think I had to create the index manually (no big deal), it did start feeding my index… FINALLY! Thank you so much. I’ve been banging my head on this for almost 2 weeks now.