How do I back up the database for Rancher?

Yes, and using -v to make it a volume is a good way to ensure you don’t accidentally lose it. As far as backing it up goes, I’m not sure what guarantees MySQL makes about a copy of its data folder taken while the server is running. So it would probably be safer to do something like

docker exec -it rancher-server-container-id mysqldump -A


docker run --link rancher-server-container-id:db mysql mysqldump -A -h 'db' -u cattle -pcattle

To get a proper atomic dump of the database content.


This definitely should be part of the docs. Having the ability to attach /var/lib/mysql from outside as a volume - i.e. knowing that that is where the data lives and that there is a VOLUME declared there (I just read the Dockerfile, so now I see) - is valuable to anyone managing the installation.

But equally knowing how to run “proper” backups, i.e. knowing that mysqldump would work here is important.

Should also have restore directions.



I was having the same problem as you; I just found the documentation now, you can look here ---->

Not quite. We finally got around to trying it, and the server refuses to start. It hangs on

Running mysql_install_db...
chown: changing ownership of '/var/lib/mysql': Permission denied
Cannot change ownership of the database directories to the 'mysql'
user.  Check that you have the necessary permissions and try again.

Ah well, selinux again. :slight_smile:

yes, this selinux is a devil :smile:

So I just built a simple version of what @vincent suggested. An image that can be run and told to do backups.

docker run --link rancher-server-container-id:db rancher-backup (OK, it isn’t called rancher-backup because I haven’t pushed it out yet, and it would have a real namespace)

It takes 4 env var options:

  • DB_DUMP_FREQ: how often to run the dump (minutes), defaults to 1440 (once/day)
  • DB_DUMP_BEGIN: when to start that frequency, as a time of day, e.g. 1330, defaults to 2330
  • DB_DUMP_TARGET: where to dump the backups, defaults to /backup
  • DB_DUMP_DEBUG: enables set -x on the bash script that does the backups, for debugging

So you could map something into /backup and have it run the backups.
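A sketch of how such an invocation might look - the image name rancher-backup is the placeholder from above, and the container name and host path are illustrative:

```shell
# Sketch: run the backup image with a host directory mapped to /backup.
# Env vars follow the list above; the values shown are the stated defaults.
docker run -d \
  --link rancher-server-container-id:db \
  -v /host/backups:/backup \
  -e DB_DUMP_FREQ=1440 \
  -e DB_DUMP_BEGIN=2330 \
  -e DB_DUMP_TARGET=/backup \
  rancher-backup
```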

Each backup is saved as a gzipped file in the DB_DUMP_TARGET directory, named rancher_backup_YYYYMMDDHHMMSS.gz (as in YYYY=year, etc.)
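For anyone scripting against those files, this is how a name in that format can be composed (the 14-digit timestamp is YYYYMMDDHHMMSS):

```shell
# Sketch: compose a backup filename matching the rancher_backup_YYYYMMDDHHMMSS.gz
# pattern described above. date +%Y%m%d%H%M%S yields the 14-digit timestamp.
FNAME="rancher_backup_$(date +%Y%m%d%H%M%S).gz"
echo "$FNAME"
```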

I am working on getting it to accept a DB_DUMP_TARGET of an smb URL, then http and maybe nfs.

It also supports DB_RESTORE_TARGET which, if given a file from a previous backup, will restore the database. Works really nicely as a one-time docker run command.
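For reference, a one-off restore run under the same assumptions - image name, host path, and backup filename are all illustrative:

```shell
# Sketch: one-time restore run, assuming the image honors DB_RESTORE_TARGET
# as described above. --rm removes the container once the restore finishes.
docker run --rm \
  --link rancher-server-container-id:db \
  -v /host/backups:/backup \
  -e DB_RESTORE_TARGET=/backup/rancher_backup_20150924120000.gz \
  rancher-backup
```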

Any interest? Should I ask my client for permission to open-source it?


Is there anything Rancher-specific about it? Sounds like it could be useful as a generic mysql backup method. If open source I think we’d be happy to link to it in the docs.


docker exec -it rancher-server-container-id mysqldump -A


docker run --link rancher-server-container-id:db mysql mysqldump -A -h 'db' -u cattle -pcattle

To get a proper atomic dump of the database content.

Hi Vincent, this technique works fine for backups, but I’m stuck on restore (glad I’m testing now).

Tried this:

/usr/bin/docker exec -it cad399e67ffa mysql -pxxxxx keycloak < /home/kevin/keycloak-2015-09-24.sql
cannot enable tty mode on non tty input
/usr/bin/docker exec -i cad399e67ffa mysql -pxxxxx keycloak < /home/kevin/keycloak-2015-09-24.sql
Warning: Using a password on the command line interface can be insecure.
ERROR 1064 (42000) at line 1: You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near 'Warning: Using a password on the command line interface can be insecure.

/’ at line 1
Nope again…

Probably easy if I knew how.

Bah, forget about it. The dump leaves a warning line about passwords on the command line in the top of the sql dump. Delete that and it’s fine.
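To make the fix concrete, deleting that warning line can be done with sed - the here-doc below just simulates a dump broken in this way, and dump.sql is an illustrative filename:

```shell
# Sketch: the dump's first line is mysqldump's password warning (captured
# because stderr was merged into the file); deleting it makes the dump
# restorable again.
cat > dump.sql <<'EOF'
Warning: Using a password on the command line interface can be insecure.
-- MySQL dump 10.13
EOF
sed -i '/Using a password on the command line/d' dump.sql
head -1 dump.sql   # prints "-- MySQL dump 10.13"
```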

It probably could, now that I think about it. The only rancher-specific things are:

  • link to the rancher db container - but that is defined on the --link line anyway, so not really rancher-specific
  • username and password - but that also can be overridden if I make a minor tweak

I’ll go get signoff.

On my phone so I can’t test it, but 2>/dev/null should hide the password warning, or even better >mydump.sql to write the dump to disk and leave errors to stderr.
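The reason either redirect works: the warning goes to stderr and the dump to stdout, and `>` captures only stdout. A minimal demonstration, with fake_mysqldump standing in for the real mysqldump call:

```shell
# Sketch: stdout and stderr are separate streams, so redirecting stdout to
# a file keeps the dump clean while warnings go elsewhere.
fake_mysqldump() {
  echo 'Warning: Using a password on the command line interface can be insecure.' >&2
  echo '-- MySQL dump'
}
fake_mysqldump > mydump.sql 2> warnings.log
head -1 mydump.sql   # prints "-- MySQL dump"
```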

OK, they just signed off. I will get this uploaded to github and the docker hub in the coming days.

Heh, I got it today. See and the image at

Issues and PRs welcome at the github repo.


The best way is docker exec -i 711830bda328 mysqldump -A > dump.sql (use -i rather than -it: allocating a tty with -t can add carriage returns to the redirected dump).

Rancher backup is in the catalog and helps to back up databases and stacks.

When I try to restore a mysqldump I get
ERROR 2013 (HY000) at line 1418: Lost connection to MySQL server during query
any ideas?

Same here. Is it necessary to have an active connection to the hosts during restore?
Log from the rancher server container:

FATAL: Exiting due to failed cluster check-in
2018-06-01 10:30:18,909 ERROR   [pool-3-thread-1] [ConsoleStatus] Check-in failed java.lang.IllegalStateException: Failed to update check-in, registration deleted
at io.cattle.platform.hazelcast.membership.dao.impl.ClusterMembershipDAOImpl.checkin( ~[cattle-hazelcast-common-0.5.0-SNAPSHOT.jar:na]
at io.cattle.platform.hazelcast.membership.DBDiscovery.checkin( ~[cattle-hazelcast-common-0.5.0-SNAPSHOT.jar:na]
at io.cattle.platform.hazelcast.membership.DBDiscovery.doRun( ~[cattle-hazelcast-common-0.5.0-SNAPSHOT.jar:na]
at org.apache.cloudstack.managed.context.NoExceptionRunnable.runInContext( [cattle-framework-managed-context-0.5.0-SNAPSHOT.jar:na]
at org.apache.cloudstack.managed.context.ManagedContextRunnable$ [cattle-framework-managed-context-0.5.0-SNAPSHOT.jar:na]
at org.apache.cloudstack.managed.context.impl.DefaultManagedContext$ [cattle-framework-managed-context-0.5.0-SNAPSHOT.jar:na]
at org.apache.cloudstack.managed.context.impl.DefaultManagedContext.callWithContext( [cattle-framework-managed-context-0.5.0-SNAPSHOT.jar:na]
at org.apache.cloudstack.managed.context.impl.DefaultManagedContext.runWithContext( [cattle-framework-managed-context-0.5.0-SNAPSHOT.jar:na]
at [cattle-framework-managed-context-0.5.0-SNAPSHOT.jar:na]
at java.util.concurrent.Executors$ [na:1.8.0_72]
at java.util.concurrent.FutureTask.runAndReset( [na:1.8.0_72]
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301( [na:1.8.0_72]
at java.util.concurrent.ScheduledThreadPoolExecutor$ [na:1.8.0_72]
at java.util.concurrent.ThreadPoolExecutor.runWorker( [na:1.8.0_72]
at java.util.concurrent.ThreadPoolExecutor$ [na:1.8.0_72]
at [na:1.8.0_72]
2018/06/01 10:30:19 http: proxy error: EOF
time="2018-06-01T10:30:19Z" level=info msg="Exiting rancher-compose-executor" version=v0.14.18
2018/06/01 10:30:19 http: proxy error: EOF
time="2018-06-01T10:30:19Z" level=info msg="Exiting go-machine-service" service=gms

Rancher server figures out who is in its HA cluster and who the leader is by reading from/writing to the DB pretty often - I think every 15 seconds. To avoid split-brain problems, if that read/write fails, rancher server kills itself. A restore (and indeed certain ways of backing up) locks the db long enough for this call to fail. I believe that is what you are experiencing.
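One way to shorten that lock window (my suggestion, not something from the Rancher docs): for InnoDB tables, mysqldump’s --single-transaction flag takes a consistent snapshot instead of holding table locks for the whole dump. The container id and credentials below are illustrative:

```shell
# Sketch: --single-transaction dumps InnoDB tables from a consistent
# snapshot without long-held table locks, reducing the chance that the
# server's periodic DB check-in fails mid-backup. Use -i (no tty) so the
# redirected dump stays clean.
docker exec -i rancher-server-container-id \
  mysqldump --single-transaction -A -u cattle -pcattle 2>/dev/null > dump.sql
```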

Our topology isn’t an HA installation. We have a single rancher server with an internal DB on a mounted volume.
The hosts I was referring to are worker hosts for stacks. My assumption was that in case of DB failure, a clean rancher server would accept an SQL dump made beforehand. I used deitch’s container for dump and restore.