Rancher server restarts now and then

I run Rancher 2.1.5 with Docker in a KVM guest with 4 vCPUs and 6 GB of RAM. The server stays stable for 2-3 weeks, then it crashes and restarts now and then.
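For context, the container is started roughly like the standard single-node install (this is a sketch from memory; the exact volume mount and tag may differ from what is actually running):

# single-node Rancher server in Docker, data persisted on the host
docker run -d --restart=unless-stopped \
  -p 80:80 -p 443:443 \
  -v /opt/rancher:/var/lib/rancher \
  rancher/rancher:v2.1.5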

The log says:

2019/02/25 19:08:10 [INFO] Handling backend connection request [c-f9l2p:m-67997d73d22f]
2019/02/25 19:08:11 [INFO] Handling backend connection request [c-f9l2p:m-5fa92dd197ec]
2019-02-25 19:08:18.427513 W | etcdserver: apply entries took too long [630.597633ms for 1 entries]
2019-02-25 19:08:18.427699 W | etcdserver: avoid queries with large range/delete range!
2019/02/25 19:08:20 [INFO] error in remotedialer server [400]: read tcp 172.17.0.2:443->10.168.122.199:38116: i/o timeout
2019-02-25 19:08:22.547809 W | wal: sync duration of 1.358299648s, expected less than 1s
2019/02/25 19:08:22 [INFO] error in remotedialer server [400]: read tcp 172.17.0.2:443->10.168.122.143:59082: i/o timeout
2019-02-25 19:08:22.579969 W | etcdserver: apply entries took too long [2.089079233s for 1 entries]
2019-02-25 19:08:22.580144 W | etcdserver: avoid queries with large range/delete range!
2019-02-25 19:08:22.711249 W | etcdserver: apply entries took too long [130.950157ms for 1 entries]
2019-02-25 19:08:22.711465 W | etcdserver: avoid queries with large range/delete range!
2019/02/25 19:08:26 [INFO] Handling backend connection request [c-f9l2p:m-9a22cfdcac66]
2019-02-25 19:08:27.940241 W | wal: sync duration of 1.054464428s, expected less than 1s
2019-02-25 19:08:28.134082 W | etcdserver: apply entries took too long [191.32317ms for 1 entries]
2019-02-25 19:08:28.134147 W | etcdserver: avoid queries with large range/delete range!
2019/02/25 19:08:33 [INFO] Handling backend connection request [c-f9l2p:m-d7d982fe2d91]
2019/02/25 19:08:37 [INFO] error in remotedialer server [400]: read tcp 172.17.0.2:443->10.168.122.66:36602: i/o timeout
2019-02-25 19:08:37.327126 W | etcdserver: apply entries took too long [252.037016ms for 1 entries]
2019-02-25 19:08:37.327303 W | etcdserver: avoid queries with large range/delete range!
2019-02-25 19:08:38.802692 W | etcdserver: apply entries took too long [399.769267ms for 1 entries]
2019-02-25 19:08:38.802847 W | etcdserver: avoid queries with large range/delete range!
2019-02-25 19:08:41.029086 W | etcdserver: apply entries took too long [315.175655ms for 1 entries]
2019-02-25 19:08:41.029557 W | etcdserver: avoid queries with large range/delete range!
2019-02-25 19:08:42.672506 W | etcdserver: apply entries took too long [212.230155ms for 1 entries]
2019-02-25 19:08:42.672758 W | etcdserver: avoid queries with large range/delete range!
2019/02/25 19:08:44 [INFO] error in remotedialer server [400]: read tcp 172.17.0.2:443->10.168.122.142:52874: i/o timeout
2019-02-25 19:08:46.986437 W | etcdserver: apply entries took too long [126.021158ms for 1 entries]
2019-02-25 19:08:46.986512 W | etcdserver: avoid queries with large range/delete range!
2019-02-25 19:08:50.560211 W | etcdserver: apply entries took too long [127.189272ms for 1 entries]
2019-02-25 19:08:50.561066 W | etcdserver: avoid queries with large range/delete range!
2019/02/25 19:08:51 [INFO] Handling backend connection request [c-f9l2p:m-67997d73d22f]
2019/02/25 19:08:52 [INFO] Handling backend connection request [c-f9l2p:m-5fa92dd197ec]
2019-02-25 19:08:52.212930 W | etcdserver: apply entries took too long [178.314907ms for 1 entries]
2019-02-25 19:08:52.213071 W | etcdserver: avoid queries with large range/delete range!
2019-02-25 19:08:57.477218 W | wal: sync duration of 1.195107389s, expected less than 1s
2019-02-25 19:08:59.872834 W | etcdserver: apply entries took too long [947.566026ms for 1 entries]
2019-02-25 19:08:59.872972 W | etcdserver: avoid queries with large range/delete range!
2019-02-25 19:09:00.048744 W | etcdserver: apply entries took too long [175.631348ms for 1 entries]
2019-02-25 19:09:00.048845 W | etcdserver: avoid queries with large range/delete range!
2019/02/25 19:09:02 [INFO] error in remotedialer server [400]: read tcp 172.17.0.2:443->10.168.122.199:38310: i/o timeout
2019/02/25 19:09:02 [INFO] error in remotedialer server [400]: read tcp 172.17.0.2:443->10.168.122.143:59210: i/o timeout
2019-02-25 19:09:08.570707 W | etcdserver: apply entries took too long [209.532514ms for 1 entries]
2019-02-25 19:09:08.570763 W | etcdserver: avoid queries with large range/delete range!
2019-02-25 19:09:12.763320 W | etcdserver: apply entries took too long [398.407956ms for 1 entries]
2019-02-25 19:09:12.763569 W | etcdserver: avoid queries with large range/delete range!
2019-02-25 19:09:14.309694 W | etcdserver: apply entries took too long [111.922852ms for 1 entries]
2019-02-25 19:09:14.309768 W | etcdserver: avoid queries with large range/delete range!
2019/02/25 19:09:15 [INFO] Handling backend connection request [c-f9l2p:m-9a22cfdcac66]
2019-02-25 19:09:16.638303 W | etcdserver: apply entries took too long [282.906026ms for 1 entries]
2019-02-25 19:09:16.638671 W | etcdserver: avoid queries with large range/delete range!
2019/02/25 19:09:19 [INFO] Handling backend connection request [c-f9l2p:m-d7d982fe2d91]
2019-02-25 19:09:19.994530 W | etcdserver: apply entries took too long [185.484417ms for 1 entries]
2019-02-25 19:09:19.994722 W | etcdserver: avoid queries with large range/delete range!
2019-02-25 19:09:20.657031 W | etcdserver: apply entries took too long [159.544441ms for 1 entries]
2019-02-25 19:09:20.657088 W | etcdserver: avoid queries with large range/delete range!
2019/02/25 19:09:25 [INFO] error in remotedialer server [400]: read tcp 172.17.0.2:443->10.168.122.66:36720: i/o timeout
2019-02-25 19:09:26.227720 W | etcdserver: apply entries took too long [274.334471ms for 1 entries]
2019-02-25 19:09:26.227983 W | etcdserver: avoid queries with large range/delete range!
2019-02-25 19:09:26.835062 W | etcdserver: apply entries took too long [313.156839ms for 1 entries]
2019-02-25 19:09:26.835122 W | etcdserver: avoid queries with large range/delete range!
2019/02/25 19:09:29 [INFO] error in remotedialer server [400]: read tcp 172.17.0.2:443->10.168.122.142:52980: i/o timeout
2019-02-25 19:09:31.049857 W | wal: sync duration of 1.085391782s, expected less than 1s
2019-02-25 19:09:32.029294 W | etcdserver: apply entries took too long [109.370216ms for 1 entries]
2019-02-25 19:09:32.029371 W | etcdserver: avoid queries with large range/delete range!
2019/02/25 19:09:34 [INFO] Handling backend connection request [c-f9l2p:m-5fa92dd197ec]
2019-02-25 19:09:36.187480 W | etcdserver: apply entries took too long [387.789724ms for 1 entries]
2019-02-25 19:09:36.187673 W | etcdserver: avoid queries with large range/delete range!
2019/02/25 19:09:40 [INFO] Handling backend connection request [c-f9l2p:m-67997d73d22f]
2019-02-25 19:09:41.869120 W | wal: sync duration of 1.347826421s, expected less than 1s
2019-02-25 19:09:42.576026 W | etcdserver: apply entries took too long [644.85894ms for 1 entries]
2019-02-25 19:09:42.576146 W | etcdserver: avoid queries with large range/delete range!
2019/02/25 19:09:44 [INFO] error in remotedialer server [400]: read tcp 172.17.0.2:443->10.168.122.143:59360: i/o timeout
2019-02-25 19:09:48.115491 W | wal: sync duration of 1.727403764s, expected less than 1s
2019-02-25 19:09:48.461646 W | etcdserver: apply entries took too long [102.249333ms for 1 entries]
2019-02-25 19:09:48.461699 W | etcdserver: avoid queries with large range/delete range!
2019-02-25 19:09:49.847050 I | mvcc: store.index: compact 4134012
2019-02-25 19:09:49.901448 W | wal: sync duration of 1.021751175s, expected less than 1s
2019/02/25 19:09:50 [INFO] error in remotedialer server [400]: read tcp 172.17.0.2:443->10.168.122.199:38554: i/o timeout
2019-02-25 19:09:58.522805 W | wal: sync duration of 1.328223901s, expected less than 1s
2019/02/25 19:09:59 [INFO] Handling backend connection request [c-f9l2p:m-9a22cfdcac66]
2019-02-25 19:09:59.095900 W | etcdserver: apply entries took too long [10.432206463s for 1 entries]
2019-02-25 19:09:59.096274 W | etcdserver: avoid queries with large range/delete range!
2019/02/25 19:10:07 [INFO] Handling backend connection request [c-f9l2p:m-d7d982fe2d91]
2019/02/25 19:10:09 [INFO] error in remotedialer server [400]: read tcp 172.17.0.2:443->10.168.122.66:36826: i/o timeout
2019/02/25 19:10:17 [INFO] Handling backend connection request [c-f9l2p:m-5fa92dd197ec]
E0225 19:10:17.374572 7 status.go:64] apiserver received an error that is not an metav1.Status: etcdserver: request timed out
2019/02/25 19:10:17 [INFO] error in remotedialer server [400]: read tcp 172.17.0.2:443->10.168.122.142:53080: i/o timeout
E0225 19:10:17.884142 7 leaderelection.go:258] Failed to update lock: etcdserver: request timed out
2019/02/25 19:10:25 [INFO] Handling backend connection request [c-f9l2p:m-67997d73d22f]
2019-02-25 19:10:26.100523 W | etcdserver: apply entries took too long [23.897744428s for 1 entries]
2019-02-25 19:10:26.100606 W | etcdserver: avoid queries with large range/delete range!
2019/02/25 19:10:27 [INFO] error in remotedialer server [400]: read tcp 172.17.0.2:443->10.168.122.143:59520: i/o timeout
2019/02/25 19:10:35 [INFO] error in remotedialer server [400]: read tcp 172.17.0.2:443->10.168.122.199:38788: i/o timeout
E0225 19:10:41.777858 7 status.go:64] apiserver received an error that is not an metav1.Status: etcdserver: request timed out
E0225 19:10:41.804332 7 status.go:64] apiserver received an error that is not an metav1.Status: etcdserver: request timed out
E0225 19:10:41.853631 7 leaderelection.go:258] Failed to update lock: etcdserver: request timed out
E0225 19:10:41.879354 7 leaderelection.go:258] Failed to update lock: etcdserver: request timed out
E0225 19:10:41.883213 7 event.go:260] Could not construct reference to: '&v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, Subsets:[]v1.EndpointSubset(nil)}' due to: 'selfLink was empty, can't make reference'. Will not report event: 'Normal' 'LeaderElection' '5ade08654f46_cd9f2be5-37c7-11e9-b234-0242ac110002 stopped leading'
F0225 19:10:42.116104 7 controllermanager.go:205] leaderelection lost
goroutine 4727 [running]:
github.com/rancher/rancher/vendor/github.com/golang/glog.stacks(0xc006fc0000, 0xc004687340, 0x4c, 0xde)
/go/src/github.com/rancher/rancher/vendor/github.com/golang/glog/glog.go:766 +0xd4
github.com/rancher/rancher/vendor/github.com/golang/glog.(*loggingT).output(0xbab1b20, 0xc000000003, 0xc039960790, 0xb712b3b, 0x14, 0xcd, 0x0)
/go/src/github.com/rancher/rancher/vendor/github.com/golang/glog/glog.go:717 +0x306
github.com/rancher/rancher/vendor/github.com/golang/glog.(*loggingT).printf(0xbab1b20, 0x3, 0x501187f, 0x13, 0x0, 0x0, 0x0)
/go/src/github.com/rancher/rancher/vendor/github.com/golang/glog/glog.go:655 +0x14b
github.com/rancher/rancher/vendor/github.com/golang/glog.Fatalf(0x501187f, 0x13, 0x0, 0x0, 0x0)
/go/src/github.com/rancher/rancher/vendor/github.com/golang/glog/glog.go:1145 +0x67
github.com/rancher/rancher/vendor/k8s.io/kubernetes/cmd/kube-controller-manager/app.Run.func2()
/go/src/github.com/rancher/rancher/vendor/k8s.io/kubernetes/cmd/kube-controller-manager/app/controllermanager.go:205 +0x47
github.com/rancher/rancher/vendor/k8s.io/client-go/tools/leaderelection.(*LeaderElector).Run.func1(0xc01602a0c0)
/go/src/github.com/rancher/rancher/vendor/k8s.io/client-go/tools/leaderelection/leaderelection.go:141 +0x40
github.com/rancher/rancher/vendor/k8s.io/client-go/tools/leaderelection.(*LeaderElector).Run(0xc01602a0c0)
/go/src/github.com/rancher/rancher/vendor/k8s.io/client-go/tools/leaderelection/leaderelection.go:148 +0xaf
github.com/rancher/rancher/vendor/k8s.io/client-go/tools/leaderelection.RunOrDie(0x81414c0, 0xc016f51d40, 0x37e11d600, 0x2540be400, 0x77359400, 0xc0167149d0, 0x51daa38, 0x0)
/go/src/github.com/rancher/rancher/vendor/k8s.io/client-go/tools/leaderelection/leaderelection.go:157 +0x70
github.com/rancher/rancher/vendor/k8s.io/kubernetes/cmd/kube-controller-manager/app.Run(0xc0158027e0, 0xc0158027e0, 0x0)
/go/src/github.com/rancher/rancher/vendor/k8s.io/kubernetes/cmd/kube-controller-manager/app/controllermanager.go:197 +0x697
github.com/rancher/rancher/vendor/k8s.io/kubernetes/cmd/kube-controller-manager/app.NewControllerManagerCommand.func1(0xc012a36280, 0xbad3568, 0x0, 0x0)
/go/src/github.com/rancher/rancher/vendor/k8s.io/kubernetes/cmd/kube-controller-manager/app/controllermanager.go:92 +0x25a
github.com/rancher/rancher/vendor/github.com/spf13/cobra.(*Command).execute(0xc012a36280, 0xc00fc784e0, 0x0, 0x19, 0xc012a36280, 0xc00fc784e0)
/go/src/github.com/rancher/rancher/vendor/github.com/spf13/cobra/command.go:757 +0x2cc
github.com/rancher/rancher/vendor/github.com/spf13/cobra.(*Command).ExecuteC(0xc012a36280, 0x20666f20676e696e, 0x2e79726f74736968, 0x7073206e65685720)
/go/src/github.com/rancher/rancher/vendor/github.com/spf13/cobra/command.go:843 +0x2fd
github.com/rancher/rancher/vendor/github.com/spf13/cobra.(*Command).Execute(0xc012a36280, 0x6874202c7465736e, 0x7220656874206e65)
/go/src/github.com/rancher/rancher/vendor/github.com/spf13/cobra/command.go:791 +0x2b
github.com/rancher/rancher/pkg/hyperkube.NewKubeControllerManager.func1(0xc016edf980, 0xc00fc784e0, 0x0, 0x19, 0xc00007a600, 0x3b67616c66206461, 0x7469206669202d20)
/go/src/github.com/rancher/rancher/pkg/hyperkube/kube-controller-manager.go:44 +0x57
github.com/rancher/rancher/pkg/hyperkube.(*HyperKube).Run.func1(0xc018fb2f60, 0xc016edf980)
/go/src/github.com/rancher/rancher/pkg/hyperkube/hyperkube.go:184 +0x76
created by github.com/rancher/rancher/pkg/hyperkube.(*HyperKube).Run
/go/src/github.com/rancher/rancher/pkg/hyperkube/hyperkube.go:183 +0x4e6
2019-02-25 19:10:48.700855 W | wal: sync duration of 1.184311808s, expected less than 1s
2019/02/25 19:11:38 [INFO] Rancher version v2.1.5 is starting
2019/02/25 19:11:38 [INFO] Rancher arguments {ACMEDomains:[] AddLocal:auto Embedded:false KubeConfig: HTTPListenPort:80 HTTPSListenPort:443 K8sMode:auto Debug:false NoCACerts:false ListenConfig: AuditLogPath:/var/log/auditlog/rancher-api-audit.log AuditLogMaxage:10 AuditLogMaxsize:100 AuditLogMaxbackup:10 AuditLevel:0}
2019/02/25 19:11:38 [INFO] Listening on /tmp/log.sock
2019/02/25 19:11:38 [INFO] Running etcd --peer-client-cert-auth --client-cert-auth --listen-client-urls=https://0.0.0.0:2379 --heartbeat-interval=500 --data-dir=/var/lib/rancher/etcd/ --advertise-client-urls=https://127.0.0.1:2379,https://127.0.0.1:4001 --initial-cluster=etcd-master=https://127.0.0.1:2380 --initial-cluster-state=new --name=etcd-master --initial-advertise-peer-urls=https://127.0.0.1:2380 --cert-file=/etc/kubernetes/ssl/kube-etcd-127-0-0-1.pem --peer-cert-file=/etc/kubernetes/ssl/kube-etcd-127-0-0-1.pem --peer-key-file=/etc/kubernetes/ssl/kube-etcd-127-0-0-1-key.pem --trusted-ca-file=/etc/kubernetes/ssl/kube-ca.pem --peer-trusted-ca-file=/etc/kubernetes/ssl/kube-ca.pem --key-file=/etc/kubernetes/ssl/kube-etcd-127-0-0-1-key.pem --election-timeout=5000 --initial-cluster-token=etcd-cluster-1 --listen-peer-urls=https://0.0.0.0:2380
2019-02-25 19:11:38.679090 I | etcdmain: etcd Version: 3.2.13
2019-02-25 19:11:38.680461 I | etcdmain: Git SHA: Not provided (use ./build instead of go build)
2019-02-25 19:11:38.680595 I | etcdmain: Go Version: go1.11
2019-02-25 19:11:38.680683 I | etcdmain: Go OS/Arch: linux/amd64
2019-02-25 19:11:38.680759 I | etcdmain: setting maximum number of CPUs to 2, total number of available CPUs is 2
2019-02-25 19:11:38.691405 N | etcdmain: the server is already initialized as member before, starting as etcd member...
2019-02-25 19:11:38.697724 I | embed: peerTLS: cert = /etc/kubernetes/ssl/kube-etcd-127-0-0-1.pem, key = /etc/kubernetes/ssl/kube-etcd-127-0-0-1-key.pem, ca = , trusted-ca = /etc/kubernetes/ssl/kube-ca.pem, client-cert-auth = true
2019-02-25 19:11:38.729623 I | embed: listening for peers on https://0.0.0.0:2380
2019-02-25 19:11:38.735586 I | embed: listening for client requests on 0.0.0.0:2379
2019-02-25 19:11:38.889742 I | etcdserver: recovered store from snapshot at index 4800048
2019-02-25 19:11:38.928058 I | mvcc: restore compact to 4133654
2019-02-25 19:11:58.755992 I | mvcc: store.index: compact 4134012
2019-02-25 19:11:59.514690 I | mvcc: resume scheduled compaction at 4134012
2019-02-25 19:12:00.504382 I | etcdserver: name = etcd-master
2019-02-25 19:12:00.504647 I | etcdserver: data dir = /var/lib/rancher/etcd/
2019-02-25 19:12:00.504679 I | etcdserver: member dir = /var/lib/rancher/etcd/member
2019-02-25 19:12:00.508738 I | etcdserver: heartbeat = 500ms
2019-02-25 19:12:00.508858 I | etcdserver: election = 5000ms
2019-02-25 19:12:00.508961 I | etcdserver: snapshot count = 100000
2019-02-25 19:12:00.509083 I | etcdserver: advertise client URLs = https://127.0.0.1:2379,https://127.0.0.1:4001
2019-02-25 19:14:01.921904 I | etcdserver: restarting member e92d66acd89ecf29 in cluster 7581d6eb2d25405b at commit index 4892852
2019-02-25 19:14:03.313841 I | raft: e92d66acd89ecf29 became follower at term 15
2019-02-25 19:14:03.315637 I | raft: newRaft e92d66acd89ecf29 [peers: [e92d66acd89ecf29], term: 15, commit: 4892852, applied: 4800048, lastindex: 4892852, lastterm: 15]
2019-02-25 19:14:03.334911 I | etcdserver/api: enabled capabilities for version 3.2
2019-02-25 19:14:03.335098 I | etcdserver/membership: added member e92d66acd89ecf29 [https://127.0.0.1:2380] to cluster 7581d6eb2d25405b from store
2019-02-25 19:14:03.335220 I | etcdserver/membership: set the cluster version to 3.2 from store
2019-02-25 19:14:03.456708 I | mvcc: restore compact to 4133654
2019-02-25 19:14:13.326494 I | mvcc: store.index: compact 4134012
2019-02-25 19:14:13.347793 I | mvcc: resume scheduled compaction at 4134012
2019-02-25 19:14:13.347924 W | auth: simple token is not cryptographically signed
2019-02-25 19:14:13.407672 I | etcdserver: starting server… [version: 3.2.13, cluster version: 3.2]
2019-02-25 19:14:13.490824 I | embed: ClientTLS: cert = /etc/kubernetes/ssl/kube-etcd-127-0-0-1.pem, key = /etc/kubernetes/ssl/kube-etcd-127-0-0-1-key.pem, ca = , trusted-ca = /etc/kubernetes/ssl/kube-ca.pem, client-cert-auth = true
2019-02-25 19:14:13.744955 I | mvcc: finished scheduled compaction at 4134012 (took 396.8678ms)
2019-02-25 19:14:17.858271 I | raft: e92d66acd89ecf29 is starting a new election at term 15
2019-02-25 19:14:17.895687 I | raft: e92d66acd89ecf29 became candidate at term 16
2019-02-25 19:14:17.896027 I | raft: e92d66acd89ecf29 received MsgVoteResp from e92d66acd89ecf29 at term 16
2019-02-25 19:14:17.897097 I | raft: e92d66acd89ecf29 became leader at term 16
2019-02-25 19:14:17.897330 I | raft: raft.node: e92d66acd89ecf29 elected leader e92d66acd89ecf29 at term 16
2019-02-25 19:14:28.869949 E | etcdserver: publish error: etcdserver: request timed out, possibly due to previous leader failure
2019-02-25 19:14:29.139145 I | embed: ready to serve client requests
2019-02-25 19:14:29.139734 I | etcdserver: published {Name:etcd-master ClientURLs:[https://127.0.0.1:2379 https://127.0.0.1:4001]} to cluster 7581d6eb2d25405b
2019-02-25 19:14:29.182288 I | embed: serving client requests on [::]:2379
2019/02/25 19:14:29 [INFO] Running kube-apiserver --service-cluster-ip-range=10.43.0.0/16 --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file= --proxy-client-cert-file= --bind-address=127.0.0.1 --secure-port=6443 --requestheader-allowed-names= --service-node-port-range=30000-32767 --service-account-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --authorization-mode=Node,RBAC --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --etcd-servers=https://127.0.0.1:2379 --insecure-bind-address=127.0.0.1 --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem --allow-privileged=true --requestheader-username-headers= --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --endpoint-reconciler-type=lease --requestheader-group-headers= --advertise-address=10.43.0.1 --insecure-port=0 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --requestheader-extra-headers-prefix= --etcd-prefix=/registry --storage-backend=etcd3 --cloud-provider= --proxy-client-key-file= --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota -v=1 --logtostderr=false --alsologtostderr=false
2019/02/25 19:14:29 [INFO] Activating driver gke
2019/02/25 19:14:29 [INFO] Waiting for server to become available: Get https://127.0.0.1:6443/version: dial tcp 127.0.0.1:6443: connect: connection refused
2019/02/25 19:14:29 [INFO] Activating driver gke done
2019/02/25 19:14:29 [INFO] Activating driver aks
2019/02/25 19:14:29 [INFO] Activating driver aks done
2019/02/25 19:14:29 [INFO] Activating driver eks
2019/02/25 19:14:29 [INFO] Activating driver eks done
2019/02/25 19:14:29 [INFO] Activating driver import
2019/02/25 19:14:29 [INFO] Activating driver import done
2019/02/25 19:14:29 [INFO] Activating driver rke
2019/02/25 19:14:29 [INFO] Activating driver rke done

How stable should we expect the Rancher server to run? Am I hitting a memory limit with my setup?
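In the meantime I plan to watch memory on the host and in the container while it runs, along these lines (my own diagnostic sketch; the container name is just a placeholder):

# live CPU/memory usage of the Rancher container (replace 'rancher' with the actual container name)
docker stats rancher

# overall host memory in MB
free -m

# check whether the kernel OOM killer has terminated anything
dmesg -T | grep -i -E 'out of memory|oom'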