Centralised Logging Strategies

I wanted to get some ideas on the different approaches people are taking to logging in production.

Our Docker containers are set up to log to stdout/stderr. My plan is to set --log-driver on the Docker daemon on every host I deploy and pump those logs to a centralised location.
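
To make that concrete, this is roughly what I have in mind on each host. The gelf driver and the logs.example.com endpoint are just placeholders for whatever log server we end up with:

```sh
# Set the default log driver for every container on this host.
# Driver and endpoint are placeholders -- swap in whatever the
# chosen log server actually expects.
cat <<'EOF' | sudo tee /etc/docker/daemon.json
{
  "log-driver": "gelf",
  "log-opts": {
    "gelf-address": "udp://logs.example.com:12201"
  }
}
EOF
# The daemon needs a restart to pick up the new default (assuming systemd hosts).
sudo systemctl restart docker
```

Containers started after the restart inherit this default, and an individual container can still override it with its own --log-driver if needed.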

For the log server I am considering either deploying our own ELK stack, using a hosted ELK service (Logit, Qbox), or possibly another hosted option (Loggly, Logentries, Papertrail).

I have not had experience with any of these, but I am assuming the choice of driver (syslog/journald, gelf, fluentd, awslogs, splunk) will largely be driven by which log server I go with, and that I should just pick whichever driver is easiest to integrate with it.
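
From what I can tell, the per-container flags look pretty similar across drivers, so switching backends later shouldn't mean a big rewrite. Something along these lines (all the addresses are made up):

```sh
# gelf, e.g. feeding Logstash or Graylog
docker run -d --log-driver=gelf \
  --log-opt gelf-address=udp://logs.example.com:12201 nginx

# syslog over TCP, which several of the hosted services accept
docker run -d --log-driver=syslog \
  --log-opt syslog-address=tcp://logs.example.com:514 nginx

# fluentd, assuming a local fluentd/fluent-bit agent that forwards upstream
docker run -d --log-driver=fluentd \
  --log-opt fluentd-address=localhost:24224 nginx
```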

As an aside, I know it's a small thing, but is there any way to get Rancher to set --log-driver to the same value on each of my hosts, instead of doing it manually? This would be at the Docker daemon level, as opposed to setting it per container. (I saw that per-container log driver config was recently added to the UI, which is awesome!)
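
The workaround I am looking at in the meantime is to bake the flag into host provisioning rather than set it by hand afterwards. If the hosts are created with docker-machine (which I understand Rancher's cloud host drivers use under the hood), something like this should work; the endpoint is again a placeholder:

```sh
# Pass daemon flags at host creation time so every new host
# comes up with the same default log driver.
# (Cloud credentials / region flags omitted.)
docker-machine create --driver amazonec2 \
  --engine-opt log-driver=gelf \
  --engine-opt "log-opt=gelf-address=udp://logs.example.com:12201" \
  logging-test-host
```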

-Marcin

Some of it was covered there: Monitoring - How are other Rancher users doing it