New cluster fails provisioning

Hello, I’m quite new to Rancher and wanted to try it out on a small cluster with 3 machines. I executed the Docker command on the “master” VM, then logged in to the web interface and tried to create a cluster with only that one node to begin with. The web interface shows “This cluster is currently Provisioning” and it hangs there forever. I took a look at the Docker logs of the Rancher container; these are the last few lines:

2020/04/01 13:53:03 [INFO] [mgmt-cluster-rbac-delete] Updating cluster c-rkd9w
2020/04/01 13:53:03 [INFO] [mgmt-auth-prtb-controller] Creating roleBinding for subject user-6bsmb with role project-owner in namespace
2020/04/01 13:53:03 [INFO] [mgmt-auth-prtb-controller] Creating roleBinding for subject user-6bsmb with role admin in namespace
2020/04/01 13:53:03 [INFO] [mgmt-auth-prtb-controller] Creating roleBinding for subject user-6bsmb with role admin in namespace
2020/04/01 13:53:03 [INFO] [mgmt-auth-prtb-controller] Updating clusterRoleBinding clusterrolebinding-kql98 for cluster membership in cluster c-rkd9w for subject user-6bsmb
I0401 13:53:05.080689 27 controller.go:606] quota admission added evaluator for: clusterregistrationtokens.management.cattle.io
2020-04-01 13:54:13.672553 W | etcdserver: read-only range request "key:"/registry/configmaps/kube-system/cattle-controllers" " with result "range_response_count:1 size:397" took too long (117.425424ms) to execute
2020-04-01 13:54:13.673042 W | etcdserver: read-only range request "key:"/registry/priorityclasses" range_end:"/registry/priorityclasset" count_only:true " with result "range_response_count:0 size:7" took too long (122.350603ms) to execute
2020-04-01 13:55:08.393141 W | etcdserver: read-only range request "key:"/registry/leases/kube-system/kube-scheduler" " with result "range_response_count:1 size:284" took too long (113.20285ms) to execute
2020-04-01 13:55:12.187114 W | etcdserver: read-only range request "key:"/registry/configmaps/kube-system/k3s" " with result "range_response_count:1 size:367" took too long (131.514872ms) to execute
2020-04-01 13:55:12.187262 W | etcdserver: read-only range request "key:"/registry/leases/kube-system/kube-controller-manager" " with result "range_response_count:1 size:303" took too long (242.489635ms) to execute
2020-04-01 13:55:12.187375 W | etcdserver: read-only range request "key:"/registry/management.cattle.io/composeconfigs" range_end:"/registry/management.cattle.io/composeconfigt" count_only:true " with result "range_response_count:0 size:5" took too long (244.511815ms) to execute
2020-04-01 13:57:13.599533 W | etcdserver: read-only range request "key:"/registry/namespaces/default" " with result "range_response_count:1 size:170" took too long (169.196635ms) to execute
2020-04-01 13:58:18.334604 W | etcdserver: read-only range request "key:"/registry/management.cattle.io/users" range_end:"/registry/management.cattle.io/usert" count_only:true " with result "range_response_count:0 size:7" took too long (109.425813ms) to execute
2020-04-01 13:58:18.337597 W | etcdserver: read-only range request "key:"/registry/limitranges" range_end:"/registry/limitranget" count_only:true " with result "range_response_count:0 size:5" took too long (208.133499ms) to execute
2020-04-01 13:58:30.660929 W | etcdserver: read-only range request "key:"/registry/leases/kube-system/kube-scheduler" " with result "range_response_count:1 size:286" took too long (126.95084ms) to execute
2020-04-01 13:59:03.199213 W | etcdserver: read-only range request "key:"/registry/leases/kube-system/cloud-controller-manager" " with result "range_response_count:1 size:305" took too long (133.904489ms) to execute
2020-04-01 13:59:18.846895 W | etcdserver: read-only range request "key:"/registry/configmaps/kube-system/cattle-controllers" " with result "range_response_count:1 size:398" took too long (141.111113ms) to execute
2020-04-01 13:59:41.159106 W | etcdserver: read-only range request "key:"/registry/mutatingwebhookconfigurations" range_end:"/registry/mutatingwebhookconfigurationt" count_only:true " with result "range_response_count:0 size:5" took too long (106.288019ms) to execute

Any idea what’s going wrong? etcd seems not to be there, even though I checked the etcd radio button when creating the cluster.
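
For reference, the Docker command I ran on the “master” VM was the standard single-node install from the quick start, roughly the following (newer Rancher versions also require `--privileged`):

```bash
# Standard single-node Rancher server install from the quick start guide.
# Rancher 2.5+ additionally requires the --privileged flag.
sudo docker run -d --restart=unless-stopped \
  -p 80:80 -p 443:443 \
  rancher/rancher:latest
```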

Sorry to revive this topic, but nobody is answering these issues…
I have a similar issue: I’m trying to create a custom cluster on Docker nodes across several machines for testing purposes, and it’s just not possible. I only get this error:


So I tried to see the details in the Docker logs of the etcd container and got:

...
2022-01-21 10:30:21.586046 W | etcdserver: read-only range request "key:\"/registry/apiregistration.k8s.io/apiservices/\" range_end:\"/registry/apiregistration.k8s.io/apiservices0\" count_only:true " with result "range_response_count:0 size:8" took too long (276.478317ms) to execute
2022-01-21 10:30:26.785514 W | etcdserver: request "header:<ID:13161627486227643740 username:\"system:node\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/masterleases/172.16.121.70\" mod_revision:95992 > success:<request_put:<key:\"/registry/masterleases/172.16.121.70\" value_size:69 lease:3938255449372867930 >> failure:<request_range:<key:\"/registry/masterleases/172.16.121.70\" > >>" with result "size:18" took too long (100.303709ms) to execute
2022-01-21 10:30:33.577126 W | etcdserver: read-only range request "key:\"/registry/apiextensions.k8s.io/customresourcedefinitions/\" range_end:\"/registry/apiextensions.k8s.io/customresourcedefinitions0\" count_only:true " with result "range_response_count:0 size:6" took too long (378.979018ms) to execute
2022-01-21 10:30:34.410083 W | etcdserver: read-only range request "key:\"/registry/persistentvolumeclaims/\" range_end:\"/registry/persistentvolumeclaims0\" count_only:true " with result "range_response_count:0 size:6" took too long (158.4635ms) to execute
2022-01-21 10:30:35.710575 W | etcdserver: read-only range request "key:\"/registry/volumeattachments/\" range_end:\"/registry/volumeattachments0\" count_only:true " with result "range_response_count:0 size:6" took too long (231.922112ms) to execute
2022-01-21 10:30:35.710640 W | etcdserver: read-only range request "key:\"/registry/secrets/\" range_end:\"/registry/secrets0\" count_only:true " with result "range_response_count:0 size:8" took too long (348.092415ms) to execute
2022-01-21 10:30:36.757283 W | etcdserver: request "header:<ID:13161627486227643779 username:\"system:node\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/masterleases/172.16.121.70\" mod_revision:96003 > success:<request_put:<key:\"/registry/masterleases/172.16.121.70\" value_size:69 lease:3938255449372867969 >> failure:<request_range:<key:\"/registry/masterleases/172.16.121.70\" > >>" with result "size:18" took too long (195.075977ms) to execute
2022-01-21 10:30:43.592600 W | etcdserver: read-only range request "key:\"/registry/ingressclasses/\" range_end:\"/registry/ingressclasses0\" count_only:true " with result "range_response_count:0 size:6" took too long (152.822598ms) to execute
2022-01-21 10:30:46.541015 W | etcdserver: read-only range request "key:\"/registry/horizontalpodautoscalers/\" range_end:\"/registry/horizontalpodautoscalers0\" count_only:true " with result "range_response_count:0 size:6" took too long (128.979398ms) to execute
2022-01-21 10:30:46.541120 W | etcdserver: read-only range request "key:\"/registry/namespaces/default\" " with result "range_response_count:1 size:342" took too long (157.18141ms) to execute
2022-01-21 10:30:46.767565 W | etcdserver: request "header:<ID:13161627486227643821 username:\"system:node\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/masterleases/172.16.121.70\" mod_revision:96014 > success:<request_put:<key:\"/registry/masterleases/172.16.121.70\" value_size:69 lease:3938255449372868011 >> failure:<request_range:<key:\"/registry/masterleases/172.16.121.70\" > >>" with result "size:18" took too long (116.747278ms) to execute
2022-01-21 10:30:52.552705 W | etcdserver: read-only range request "key:\"/registry/leases/kube-system/kube-scheduler\" " with result "range_response_count:1 size:478" took too long (274.604839ms) to execute
2022-01-21 10:30:55.106743 W | etcdserver: read-only range request "key:\"/registry/csidrivers/\" range_end:\"/registry/csidrivers0\" count_only:true " with result "range_response_count:0 size:6" took too long (123.223505ms) to execute
2022-01-21 10:30:57.724465 W | etcdserver: read-only range request "key:\"/registry/flowschemas/\" range_end:\"/registry/flowschemas0\" count_only:true " with result "range_response_count:0 size:8" took too long (137.172363ms) to execute
2022-01-21 10:31:01.639963 W | etcdserver: read-only range request "key:\"/registry/controllerrevisions/\" range_end:\"/registry/controllerrevisions0\" count_only:true " with result "range_response_count:0 size:6" took too long (350.76584ms) to execute
2022-01-21 10:31:02.291227 W | etcdserver: request "header:<ID:13161627486227643871 > lease_revoke:<id:36a77e77269dbdab>" with result "size:28" took too long (483.046633ms) to execute
2022-01-21 10:31:04.124968 W | etcdserver: read-only range request "key:\"/registry/roles/\" range_end:\"/registry/roles0\" count_only:true " with result "range_response_count:0 size:8" took too long (126.052218ms) to execute
2022-01-21 10:31:07.132592 W | etcdserver: request "header:<ID:13161627486227643889 username:\"system:node\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/masterleases/172.16.121.70\" mod_revision:96035 > success:<request_put:<key:\"/registry/masterleases/172.16.121.70\" value_size:69 lease:3938255449372868079 >> failure:<request_range:<key:\"/registry/masterleases/172.16.121.70\" > >>" with result "size:18" took too long (210.919106ms) to execute
2022-01-21 10:31:07.132724 W | etcdserver: read-only range request "key:\"/registry/leases/kube-system/kube-scheduler\" " with result "range_response_count:1 size:479" took too long (465.48232ms) to execute
2022-01-21 10:31:12.246200 W | etcdserver: request "header:<ID:13161627486227643909 > lease_revoke:<id:36a77e77269dbdcb>" with result "size:28" took too long (471.929157ms) to execute
2022-01-21 10:31:12.246316 W | etcdserver: read-only range request "key:\"/registry/leases/kube-system/kube-controller-manager\" " with result "range_response_count:1 size:506" took too long (432.362047ms) to execute
2022-01-21 10:31:12.790421 W | etcdserver: read-only range request "key:\"/registry/resourcequotas/\" range_end:\"/registry/resourcequotas0\" count_only:true " with result "range_response_count:0 size:6" took too long (306.298529ms) to execute
...

Everywhere I search there are only very specific fixes, and I can’t find a general explanation of this problem.
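The closest thing to a general hint I’ve found is that these “took too long” warnings usually mean the storage underneath etcd is too slow (or the VM is starved for CPU), so maybe a disk check along the lines of the fdatasync benchmark from the etcd docs is worth running; `/var/lib/etcd` below is just where the etcd data lives on my test node, adjust if yours differs:

```bash
# fdatasync latency benchmark suggested by the etcd documentation.
# Run it against the same disk/mount that holds the etcd data directory;
# etcd wants the 99th percentile fdatasync latency to stay below ~10ms.
fio --rw=write --ioengine=sync --fdatasync=1 \
    --directory=/var/lib/etcd --size=22m --bs=2300 \
    --name=etcd-disk-check
```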
Some users advise removing everything and trying again, but I really don’t understand how I’m supposed to clean up: every command and trick I try to delete the old containers and the Kubernetes configuration leaves something behind that restarts all the containers, and the only way I’ve found to stop that is to stop the Docker daemon. If anyone has any idea what this problem is and what to do about it, please post an answer here.
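
For what it’s worth, the kind of cleanup I’ve been attempting looks roughly like this; the commands and paths are pieced together from various Rancher node-cleanup guides, so treat it as a sketch rather than an official procedure:

```bash
# WARNING: this wipes every container and volume on the node. Only run it on
# a machine you want to reset completely (not on the host running the
# Rancher server container itself).
docker rm -f $(docker ps -qa)
docker volume rm $(docker volume ls -q)

# Leftover mounts under /var/lib/kubelet can block the deletion below,
# so unmount them first.
for m in $(mount | grep '/var/lib/kubelet' | awk '{print $3}'); do sudo umount "$m"; done

# State directories that make a re-added node believe it still belongs to
# the old cluster.
sudo rm -rf /etc/kubernetes /etc/cni /opt/cni \
            /var/lib/etcd /var/lib/cni /var/lib/kubelet /var/lib/rancher
```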
thanks.

PS: sometimes people answer “try kubectl … on …”. Please indicate where, because the whole installation runs on Docker nodes, so when I try to use kubectl it mostly ends with “cannot connect to the server …”.
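
As far as I can tell, the only kubectl available at this point is the one bundled inside the Rancher server container itself (assuming Rancher 2.5+, where that container runs an embedded k3s), and it only talks to Rancher’s internal management cluster, not to the custom cluster I’m trying to create; `rancher` below is just whatever name `docker ps` shows for the rancher/rancher container:

```bash
# Assumes a single-container Rancher install (2.5+); "rancher" is a
# placeholder for the actual container name shown by `docker ps`.
docker exec -it rancher kubectl get pods -A
```

For the downstream custom cluster, the kubeconfig can normally be downloaded from the Rancher UI, but only once the cluster has actually come up.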

No clue on your specific problem (the main problems I’ve seen with etcd have been firewall- or SELinux-related, and the instructions normally say to turn those off), but if etcd isn’t responding, then kube-apiserver can’t do anything, so kubectl commands are also unlikely to work.
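
If you want to rule those two out quickly, something along these lines on each node is the usual quick-and-dirty check (RHEL/CentOS-style commands, and obviously only for a throwaway test environment):

```bash
# Check whether SELinux is enforcing, and switch it to permissive
# until the next reboot.
getenforce
sudo setenforce 0

# Check whether firewalld is running, and stop it for the test.
sudo systemctl status firewalld
sudo systemctl stop firewalld

# If you would rather open ports than disable the firewall, the ones that
# most commonly bite are 2379-2380/tcp (etcd), 6443/tcp (kube-apiserver)
# and 10250/tcp (kubelet).
```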
