How do I back up the database for Rancher?

Same here. Is it necessary to have an active connection to the hosts during a restore?
Log from rancher server container

FATAL: Exiting due to failed cluster check-in
2018-06-01 10:30:18,909 ERROR   [pool-3-thread-1] [ConsoleStatus] Check-in failed java.lang.IllegalStateException: Failed to update check-in, registration deleted
at io.cattle.platform.hazelcast.membership.dao.impl.ClusterMembershipDAOImpl.checkin( ~[cattle-hazelcast-common-0.5.0-SNAPSHOT.jar:na]
at io.cattle.platform.hazelcast.membership.DBDiscovery.checkin( ~[cattle-hazelcast-common-0.5.0-SNAPSHOT.jar:na]
at io.cattle.platform.hazelcast.membership.DBDiscovery.doRun( ~[cattle-hazelcast-common-0.5.0-SNAPSHOT.jar:na]
at org.apache.cloudstack.managed.context.NoExceptionRunnable.runInContext( [cattle-framework-managed-context-0.5.0-SNAPSHOT.jar:na]
at org.apache.cloudstack.managed.context.ManagedContextRunnable$ [cattle-framework-managed-context-0.5.0-SNAPSHOT.jar:na]
at org.apache.cloudstack.managed.context.impl.DefaultManagedContext$ [cattle-framework-managed-context-0.5.0-SNAPSHOT.jar:na]
at org.apache.cloudstack.managed.context.impl.DefaultManagedContext.callWithContext( [cattle-framework-managed-context-0.5.0-SNAPSHOT.jar:na]
at org.apache.cloudstack.managed.context.impl.DefaultManagedContext.runWithContext( [cattle-framework-managed-context-0.5.0-SNAPSHOT.jar:na]
at [cattle-framework-managed-context-0.5.0-SNAPSHOT.jar:na]
at java.util.concurrent.Executors$ [na:1.8.0_72]
at java.util.concurrent.FutureTask.runAndReset( [na:1.8.0_72]
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301( [na:1.8.0_72]
at java.util.concurrent.ScheduledThreadPoolExecutor$ [na:1.8.0_72]
at java.util.concurrent.ThreadPoolExecutor.runWorker( [na:1.8.0_72]
at java.util.concurrent.ThreadPoolExecutor$ [na:1.8.0_72]
at [na:1.8.0_72]
2018/06/01 10:30:19 http: proxy error: EOF
time="2018-06-01T10:30:19Z" level=info msg="Exiting rancher-compose-executor" version=v0.14.18
2018/06/01 10:30:19 http: proxy error: EOF
time="2018-06-01T10:30:19Z" level=info msg="Exiting go-machine-service" service=gms

Rancher server figures out who is in its HA cluster and who is the leader by reading from/writing to the DB fairly often; I think it is every 15 seconds. To avoid split-brain problems, if that read/write fails, Rancher server kills itself. A restore (and indeed certain ways of backing up) locks the DB long enough for this call to fail. I believe that is what you are experiencing.
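If the failure is being triggered by a locking backup, a non-locking dump may avoid it. A minimal sketch, assuming the default internal DB name `cattle` and a reachable MySQL instance (host and credentials here are placeholders for your setup):

```shell
# Dump the Rancher "cattle" DB without holding long table locks.
# --single-transaction takes a consistent InnoDB snapshot instead of locking
# tables, so the server's periodic check-in can keep succeeding.
# --quick streams rows instead of buffering whole tables in memory.
mysqldump \
  --single-transaction \
  --quick \
  -h 127.0.0.1 -u cattle -p \
  cattle > rancher-backup-$(date +%F).sql
```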

Our topology isn’t an HA installation. We have a single Rancher server with an internal DB on a mounted volume.
The hosts I was referring to are the worker hosts for our stacks. My assumption was that in case of DB failure, a clean Rancher server would accept an SQL dump made earlier. I used deitch’s container for the dump and restore.

Whether you have an HA installation or not, the cluster logic still runs. It is basically always on.
It is not necessary to have an active connection to the worker hosts during a restore.
Your assumption is correct: if you create a new DB from a previous dump and then point a new rancher-server container at it, that will work (assuming the old and new rancher-server containers have the same FQDN).
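As a sketch of that flow (container names, the password, and the dump filename are placeholders, not an exact recipe):

```shell
# 1. Start a fresh MySQL container to hold the restored data (placeholder password).
docker run -d --name rancher-db \
  -e MYSQL_ROOT_PASSWORD=secret \
  -e MYSQL_DATABASE=cattle \
  mysql:5.7

# 2. Load the previously taken dump into the new DB.
docker exec -i rancher-db mysql -uroot -psecret cattle < rancher-backup.sql

# 3. Point a new rancher/server container at the external DB using the
#    documented --db-* options (substitute your real DB host and credentials).
docker run -d --restart=unless-stopped -p 8080:8080 rancher/server \
  --db-host your-db-host --db-port 3306 \
  --db-user root --db-pass secret --db-name cattle
```

Keeping the same FQDN in front of the new container, as noted above, is what lets the existing agents reconnect.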

And does it work the same if I don’t use a separate container for the DB, but run the internal DB directly in Rancher?

We do not recommend the internal DB for production. But which part, specifically, are you asking works the same?

I was asking about restoring a Rancher server running in a single container with the internal DB and a bind-mounted MySQL volume on the host: whether it is possible to run a clean rancher/server container and, after its initialization, do the actual restore.
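For the internal-DB case, one possible approach (a sketch only; the host path and the embedded DB credentials are assumptions, so check them against your setup) is to start a clean container with the same bind mount and restore into it once the embedded MySQL has initialized:

```shell
# Start a clean rancher/server with the MySQL data dir bind-mounted on the host
# (the /var/lib/mysql mount is the documented way to persist the internal DB).
docker run -d --name rancher-server \
  -v /opt/rancher/mysql:/var/lib/mysql \
  -p 8080:8080 rancher/server

# After the embedded MySQL is up, load the dump into the internal "cattle" DB.
# (Credentials are an assumption here; the embedded MySQL may differ.)
docker exec -i rancher-server mysql -uroot cattle < rancher-backup.sql
```

Restarting the rancher-server container after the restore would make sure it re-reads the restored state rather than whatever schema it initialized with.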

Is it sufficient to have a separate MySQL container for Rancher, or should we move our DB somewhere else, e.g. to the cloud?

Rancher is not special in this case. You should just follow general SQL DB best practices.

A single MySQL container would not be HA and would be a fairly dangerous way to run. You’d be safer running MySQL somewhere with a true HA setup. Many of our users use AWS RDS for their DB.