How do I back up the database for Rancher?

Yes, and using -v to make it a volume is a good way to ensure you don't accidentally lose it. As far as backing it up, I'm not sure what guarantees MySQL makes about using a copy of its data folder made while it was running. So it would probably be safer to do something like

docker exec -it rancher-server-container-id mysqldump -A

or

docker run --link rancher-server-container-id:db mysql mysqldump -A -h 'db' -u cattle -pcattle

To get a proper atomic dump of the database content.
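
For example, to capture a dump to a file on the host (a sketch: the container ID is a placeholder, and --single-transaction is my addition, assuming the tables are InnoDB):

docker exec rancher-server-container-id mysqldump -A --single-transaction > rancher-dump.sql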

This definitely should be part of the docs. Having the ability to attach to /var/lib/mysql from outside as a volume - i.e. knowing that is where the data lives and that there is a VOLUME declared there (I just read the Dockerfile, so now I see) - is valuable to anyone managing the installation.
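
For reference, a minimal sketch of mapping that volume at launch (the host path is an assumption; the image is the stock rancher/server):

docker run -d --restart=unless-stopped -p 8080:8080 -v /opt/rancher/mysql:/var/lib/mysql rancher/server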

But equally, knowing how to run "proper" backups - i.e. knowing that mysqldump would work here - is important.

Should also have restore directions.

Thanks.

I was having the same problem as you; I just now found the documentation. You can look here:
http://docs.rancher.com/rancher/upgrading/

Not quite. We finally got around to trying it, and the server refuses to start. It hangs on

Running mysql_install_db...
chown: changing ownership of '/var/lib/mysql': Permission denied
Cannot change ownership of the database directories to the 'mysql'
user.  Check that you have the necessary permissions and try again.

Ah well, SELinux again. :slight_smile:

Yes, this SELinux is a devil :smile:
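
For anyone hitting this, one hedged workaround is to let Docker relabel the host directory with the :Z volume option (the host path is an assumption; running chcon -Rt svirt_sandbox_file_t /opt/rancher/mysql on the host is the manual alternative):

docker run -d -p 8080:8080 -v /opt/rancher/mysql:/var/lib/mysql:Z rancher/server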

So I just built a simple version of what @vincent suggested. An image that can be run and told to do backups.

docker run --link rancher-server-container-id:db rancher-backup

(OK, it isn't called rancher-backup because I haven't pushed it out yet, and it would have a real namespace.)

It takes four env var options:

  • DB_DUMP_FREQ: how often to run the dump, in minutes; defaults to 1440 (once per day)
  • DB_DUMP_BEGIN: when to start that schedule, as a time of day (HHMM), e.g. 1330; defaults to 2330
  • DB_DUMP_TARGET: where to write the backups; defaults to /backup
  • DB_DUMP_DEBUG: enables set -x in the bash script that does the backups, for debugging

So you could map something into /backup and have it run the backups.
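
For example (a sketch: rancher-backup is the placeholder name from above, and the host path is an assumption):

docker run -d --link rancher-server-container-id:db -e DB_DUMP_FREQ=1440 -e DB_DUMP_BEGIN=2330 -v /opt/backups:/backup rancher-backup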

Each backup is saved in the DB_DUMP_TARGET directory as a gzipped file named rancher_backup_YYYYMMDDHHMMSS.gz (as in YYYY=year, etc.).

I am working on getting it to accept a DB_DUMP_TARGET of an SMB URL, then HTTP and maybe NFS.

It also supports DB_RESTORE_TARGET which, if given a file from a previous backup, will restore the database. Works really nicely as a one-time docker run command.
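
For example (a sketch: the filename is hypothetical but follows the pattern above, and the backup directory is mounted so the file is visible inside the container):

docker run --rm --link rancher-server-container-id:db -v /opt/backups:/backup -e DB_RESTORE_TARGET=/backup/rancher_backup_20150925120000.gz rancher-backup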

Any interest? Should I ask my client for permission to open-source it?

Is there anything Rancher-specific about it? Sounds like it could be useful as a generic mysql backup method. If open source I think we'd be happy to link to it in the docs.

docker exec -it rancher-server-container-id mysqldump -A

or

docker run --link rancher-server-container-id:db mysql mysqldump -A -h 'db' -u cattle -pcattle

To get a proper atomic dump of the database content.

Hi Vincent, this technique works fine for backups, but I'm stuck on restore (glad I'm testing now).

Tried this:

/usr/bin/docker exec -it cad399e67ffa mysql -pxxxxx keycloak < /home/kevin/keycloak-2015-09-24.sql
cannot enable tty mode on non tty input

Nope…

/usr/bin/docker exec -i cad399e67ffa mysql -pxxxxx keycloak < /home/kevin/keycloak-2015-09-24.sql
Warning: Using a password on the command line interface can be insecure.
ERROR 1064 (42000) at line 1: You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near 'Warning: Using a password on the command line interface can be insecure.

/' at line 1

Nope again…

Probably easy if I knew how.

Bah, forget about it. The dump leaves a warning line about passwords on the command line at the top of the SQL dump. Delete that and it's fine.
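
If you already have a polluted dump, a sketch of stripping that line in place (assuming GNU sed):

sed -i '/Warning: Using a password/d' /home/kevin/keycloak-2015-09-24.sql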

It probably could, now that I think about it. The only Rancher-specific things are:

  • the link to the rancher db container - but that is defined on the --link line anyway, so not really Rancher-specific
  • the username and password - but those can also be overridden if I make a minor tweak

I'll go get signoff.

On my phone so I can't test it, but 2>/dev/null should hide the password warning, or even better, > mydump.sql to write the dump to disk and leave errors on stderr.
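
Something like this (an untested sketch; the container ID, password, and filenames come from the posts above):

docker exec -i cad399e67ffa mysqldump -pxxxxx keycloak 2>/dev/null > /home/kevin/keycloak-2015-09-24.sql
docker exec -i cad399e67ffa mysql -pxxxxx keycloak < /home/kevin/keycloak-2015-09-24.sql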

OK, they just signed off. I will get this uploaded to github and the docker hub in the coming days.

Heh, I got it today. See https://github.com/deitch/mysql-backup and the image at https://hub.docker.com/r/deitch/mysql-backup/

Issues and PRs welcome at the github repo.

The best way is docker exec -i 711830bda328 mysqldump -A > dump.sql (note -i without -t, so tty output doesn't end up in the dump).

Rancher backup is in the catalog and helps to back up databases and stacks.

When I try to restore a mysqldump I get

ERROR 2013 (HY000) at line 1418: Lost connection to MySQL server during query

Any ideas?

Same here. Is it necessary to have an active connection to the hosts during restore?
Log from the rancher server container:

FATAL: Exiting due to failed cluster check-in
2018-06-01 10:30:18,909 ERROR   [pool-3-thread-1] [ConsoleStatus] Check-in failed java.lang.IllegalStateException: Failed to update check-in, registration deleted
at io.cattle.platform.hazelcast.membership.dao.impl.ClusterMembershipDAOImpl.checkin(ClusterMembershipDAOImpl.java:35) ~[cattle-hazelcast-common-0.5.0-SNAPSHOT.jar:na]
at io.cattle.platform.hazelcast.membership.DBDiscovery.checkin(DBDiscovery.java:174) ~[cattle-hazelcast-common-0.5.0-SNAPSHOT.jar:na]
at io.cattle.platform.hazelcast.membership.DBDiscovery.doRun(DBDiscovery.java:163) ~[cattle-hazelcast-common-0.5.0-SNAPSHOT.jar:na]
at org.apache.cloudstack.managed.context.NoExceptionRunnable.runInContext(NoExceptionRunnable.java:15) [cattle-framework-managed-context-0.5.0-SNAPSHOT.jar:na]
at org.apache.cloudstack.managed.context.ManagedContextRunnable$1.run(ManagedContextRunnable.java:49) [cattle-framework-managed-context-0.5.0-SNAPSHOT.jar:na]
at org.apache.cloudstack.managed.context.impl.DefaultManagedContext$1.call(DefaultManagedContext.java:55) [cattle-framework-managed-context-0.5.0-SNAPSHOT.jar:na]
at org.apache.cloudstack.managed.context.impl.DefaultManagedContext.callWithContext(DefaultManagedContext.java:108) [cattle-framework-managed-context-0.5.0-SNAPSHOT.jar:na]
at org.apache.cloudstack.managed.context.impl.DefaultManagedContext.runWithContext(DefaultManagedContext.java:52) [cattle-framework-managed-context-0.5.0-SNAPSHOT.jar:na]
at org.apache.cloudstack.managed.context.ManagedContextRunnable.run(ManagedContextRunnable.java:46) [cattle-framework-managed-context-0.5.0-SNAPSHOT.jar:na]
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) [na:1.8.0_72]
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308) [na:1.8.0_72]
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180) [na:1.8.0_72]
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294) [na:1.8.0_72]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [na:1.8.0_72]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [na:1.8.0_72]
at java.lang.Thread.run(Thread.java:745) [na:1.8.0_72]
2018/06/01 10:30:19 http: proxy error: EOF
time="2018-06-01T10:30:19Z" level=info msg="Exiting rancher-compose-executor" version=v0.14.18
2018/06/01 10:30:19 http: proxy error: EOF
time="2018-06-01T10:30:19Z" level=info msg="Exiting go-machine-service" service=gms

Rancher server figures out who is in its HA cluster and who is the leader by reading from/writing to the DB pretty often - I think every 15 seconds. To avoid split-brain problems, if that read/write fails, rancher server kills itself. A restore (and indeed certain ways of backing up) locks the DB long enough for this call to fail. I believe that is what you are experiencing.
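
If the DB is external to the rancher/server container, one hedged workaround consistent with that explanation is to stop the server for the duration of the restore (the container names here are placeholders):

docker stop rancher-server
docker exec -i external-mysql mysql -u cattle -pcattle < dump.sql
docker start rancher-server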

Our topology isn't an HA installation. We have a single rancher server with an internal DB on a mounted volume.
The hosts I was referring to are the worker hosts for stacks. My assumption was that in case of DB failure, a clean rancher server would accept an SQL dump made earlier. I used deitch's container for the dump and restore.