I’ll re-deploy on 8.2 or 8.3.
It took some time but it is working!!! Holy cow! Thank you!
Also, I need to sort out the proper ports for each Docker node.
Is it 22, 80, 443, and 6443/tcp for all nodes?
It was a bit confusing which set of inbound port rules to follow.
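If so, I’d plan to open them on each node roughly like this (just a sketch assuming firewalld; ufw or cloud security-group rules would be the equivalent):
sudo firewall-cmd --permanent --add-port=22/tcp    # SSH
sudo firewall-cmd --permanent --add-port=80/tcp    # HTTP
sudo firewall-cmd --permanent --add-port=443/tcp   # HTTPS / Rancher UI
sudo firewall-cmd --permanent --add-port=6443/tcp  # Kubernetes API
sudo firewall-cmd --reload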
Hi, I keep getting this error when I try to go to the Cluster Explorer.
Can you check the System project and then the pods in the cattle-system namespace?
If you can't, can you use kubectl and run
kubectl -n cattle-system get pods -l app=cattle-cluster-agent -o wide
and
kubectl -n cattle-system logs -l app=cattle-cluster-agent
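If those pods aren't Running, something like the following should surface the events behind the failures (a sketch; the label selector assumes the default cattle-cluster-agent labels):
kubectl -n cattle-system describe pods -l app=cattle-cluster-agent
kubectl -n cattle-system get events --sort-by='.lastTimestamp'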
I tried running the commands on the main Docker host where Rancher lives:
[ameyer@docker02 ~]$ kubectl -n cattle-system get pods -l app=cattle-cluster-agent -o wide
-bash: kubectl: command not found
[ameyer@docker02 ~]$ sudo kubectl -n cattle-system get pods -l app=cattle-cluster-agent -o wide
sudo: kubectl: command not found
[ameyer@docker02 ~]$
Also, the main Docker container seems to be constantly restarting.
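If it helps, I could also try running kubectl from inside the Rancher container itself, which I believe bundles kubectl (the container name below is a placeholder; the real one would come from docker ps):
docker ps --filter ancestor=rancher/rancher --format '{{.ID}} {{.Names}}'
docker exec -it <rancher-container> kubectl -n cattle-system get pods -l app=cattle-cluster-agent -o wide
docker exec -it <rancher-container> kubectl -n cattle-system logs -l app=cattle-cluster-agent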
Please supply the logs of the container so we can see why.
This is the docs page where using kubectl is described: Rancher Docs: Access a Cluster with Kubectl and kubeconfig
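As a rough sketch of that docs flow (assuming kubectl is installed on your workstation and the kubeconfig downloaded from the Rancher UI is saved somewhere; the path below is only a placeholder):
export KUBECONFIG=~/Downloads/local.yaml   # placeholder path for the downloaded kubeconfig
kubectl -n cattle-system get pods -l app=cattle-cluster-agent -o wide
kubectl -n cattle-system logs -l app=cattle-cluster-agent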
Here are the logs from the last 30 minutes; this is only an excerpt.
I0706 21:57:11.000379 31 trace.go:205] Trace[1267518514]: "Update" url:/api/v1/namespaces/fleet-system/configmaps/fleet-controller-lock,user-agent:fleetcontroller/v0.0.0 (linux/amd64) kubernetes/$Format,client:10.42.0.12 (06-Jul-2021 21:57:10.494) (total time: 505ms):
Trace[1267518514]: ---"Object stored in database" 505ms (21:57:00.000)
Trace[1267518514]: [505.962118ms] [505.962118ms] END
I0706 21:57:11.000505 31 trace.go:205] Trace[2050904377]: "GuaranteedUpdate etcd3" type:*coordination.Lease (06-Jul-2021 21:57:10.497) (total time: 503ms):
Trace[2050904377]: ---"Transaction committed" 502ms (21:57:00.000)
Trace[2050904377]: [503.086686ms] [503.086686ms] END
I0706 21:57:11.000584 31 trace.go:205] Trace[887678719]: "GuaranteedUpdate etcd3" type:*core.ConfigMap (06-Jul-2021 21:57:10.496) (total time: 504ms):
Trace[887678719]: ---"Transaction committed" 503ms (21:57:00.000)
Trace[887678719]: [504.386031ms] [504.386031ms] END
I0706 21:57:11.000607 31 trace.go:205] Trace[1531032189]: "Update" url:/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/cloud-controller-manager,user-agent:k3s/v1.19.8+k3s1 (linux/amd64) kubernetes/95fc76b/leader-election,client:127.0.0.1 (06-Jul-2021 21:57:10.497) (total time: 503ms):
Trace[1531032189]: ---"Object stored in database" 503ms (21:57:00.000)
Trace[1531032189]: [503.565944ms] [503.565944ms] END
I0706 21:57:11.000691 31 trace.go:205] Trace[548481797]: "Update" url:/api/v1/namespaces/fleet-system/configmaps/gitjob,user-agent:gitjob/v0.0.0 (linux/amd64) kubernetes/$Format,client:10.42.0.13 (06-Jul-2021 21:57:10.496) (total time: 504ms):
Trace[548481797]: ---"Object stored in database" 504ms (21:57:00.000)
Trace[548481797]: [504.612483ms] [504.612483ms] END
I0706 21:57:11.001004 31 trace.go:205] Trace[681934126]: "GuaranteedUpdate etcd3" type:*core.Endpoints (06-Jul-2021 21:57:10.496) (total time: 504ms):
Trace[681934126]: ---"Transaction committed" 504ms (21:57:00.000)
Trace[681934126]: [504.62889ms] [504.62889ms] END
I0706 21:57:11.001129 31 trace.go:205] Trace[133157293]: "Update" url:/api/v1/namespaces/kube-system/endpoints/kube-controller-manager,user-agent:k3s/v1.19.8+k3s1 (linux/amd64) kubernetes/95fc76b/leader-election,client:127.0.0.1 (06-Jul-2021 21:57:10.496) (total time: 504ms):
Trace[133157293]: ---"Object stored in database" 504ms (21:57:00.001)
Trace[133157293]: [504.857411ms] [504.857411ms] END
2021-07-06 21:57:11.660899 W | etcdserver: read-only range request "key:\"/registry/endpointslices/default/kubernetes\" " with result "range_response_count:1 size:473" took too long (656.170696ms) to execute
2021-07-06 21:57:11.661183 W | etcdserver: read-only range request "key:\"/registry/leases/kube-system/kube-controller-manager\" " with result "range_response_count:1 size:497" took too long (656.428856ms) to execute
2021-07-06 21:57:11.661260 W | etcdserver: read-only range request "key:\"/registry/configmaps/kube-system/rancher-controller-lock\" " with result "range_response_count:1 size:581" took too long (1.04189406s) to execute
2021-07-06 21:57:11.661562 W | etcdserver: read-only range request "key:\"/registry/controllers/\" range_end:\"/registry/controllers0\" count_only:true " with result "range_response_count:0 size:7" took too long (1.075995978s) to execute
2021-07-06 21:57:11.661754 W | etcdserver: read-only range request "key:\"/registry/leases/kube-system/kube-scheduler\" " with result "range_response_count:1 size:479" took too long (1.162893415s) to execute
I0706 21:57:11.662722 31 trace.go:205] Trace[1606754118]: "Get" url:/api/v1/namespaces/kube-system/configmaps/rancher-controller-lock,user-agent:rancher-operator/v0.0.0 (linux/amd64) kubernetes/$Format,client:10.42.0.10 (06-Jul-2021 21:57:10.618) (total time: 1044ms):
Trace[1606754118]: ---"About to write a response" 1043ms (21:57:00.662)
Trace[1606754118]: [1.044071157s] [1.044071157s] END
I0706 21:57:11.664107 31 trace.go:205] Trace[1599446468]: "Get" url:/apis/discovery.k8s.io/v1beta1/namespaces/default/endpointslices/kubernetes,user-agent:k3s/v1.19.8+k3s1 (linux/amd64) kubernetes/95fc76b,client:127.0.0.1 (06-Jul-2021 21:57:11.002) (total time: 661ms):
Trace[1599446468]: ---"About to write a response" 661ms (21:57:00.663)
Trace[1599446468]: [661.705019ms] [661.705019ms] END
I0706 21:57:11.664691 31 trace.go:205] Trace[181158230]: "Get" url:/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-controller-manager,user-agent:k3s/v1.19.8+k3s1 (linux/amd64) kubernetes/95fc76b/leader-election,client:127.0.0.1 (06-Jul-2021 21:57:11.004) (total time: 660ms):
Trace[181158230]: ---"About to write a response" 660ms (21:57:00.664)
Trace[181158230]: [660.279387ms] [660.279387ms] END
I0706 21:57:11.666287 31 trace.go:205] Trace[1310408242]: "Get" url:/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-scheduler,user-agent:k3s/v1.19.8+k3s1 (linux/amd64) kubernetes/95fc76b/leader-election,client:127.0.0.1 (06-Jul-2021 21:57:10.498) (total time: 1167ms):
Trace[1310408242]: ---"About to write a response" 1167ms (21:57:00.666)
Trace[1310408242]: [1.167573772s] [1.167573772s] END
2021-07-06 21:57:12.091783 W | etcdserver: read-only range request "key:\"/registry/ingress/\" range_end:\"/registry/ingress0\" count_only:true " with result "range_response_count:0 size:7" took too long (335.489936ms) to execute
2021-07-06 21:57:12.754682 W | etcdserver: read-only range request "key:\"/registry/leases/kube-system/kube-scheduler\" " with result "range_response_count:1 size:479" took too long (659.554538ms) to execute
2021-07-06 21:57:12.754897 W | etcdserver: read-only range request "key:\"/registry/configmaps/kube-system/k3s\" " with result "range_response_count:1 size:507" took too long (924.378641ms) to execute
2021-07-06 21:57:12.755021 W | etcdserver: read-only range request "key:\"/registry/horizontalpodautoscalers/\" range_end:\"/registry/horizontalpodautoscalers0\" count_only:true " with result "range_response_count:0 size:7" took too long (958.879609ms) to execute
2021-07-06 21:57:12.755163 W | etcdserver: read-only range request "key:\"/registry/management.cattle.io/templatecontents/\" range_end:\"/registry/management.cattle.io/templatecontents0\" count_only:true " with result "range_response_count:0 size:7" took too long (993.287602ms) to execute
2021-07-06 21:57:12.755284 W | etcdserver: read-only range request "key:\"/registry/configmaps/fleet-system/fleet-agent-lock\" " with result "range_response_count:1 size:556" took too long (256.984072ms) to execute
I0706 21:57:12.755753 31 trace.go:205] Trace[701981772]: "Get" url:/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-scheduler,user-agent:k3s/v1.19.8+k3s1 (linux/amd64) kubernetes/95fc76b/leader-election,client:127.0.0.1 (06-Jul-2021 21:57:12.094) (total time: 660ms):
Trace[701981772]: ---"About to write a response" 660ms (21:57:00.755)
Trace[701981772]: [660.903112ms] [660.903112ms] END
I0706 21:57:12.756164 31 trace.go:205] Trace[829160891]: "Get" url:/api/v1/namespaces/kube-system/configmaps/k3s,user-agent:k3s/v1.19.8+k3s1 (linux/amd64) kubernetes/95fc76b,client:127.0.0.1 (06-Jul-2021 21:57:11.830) (total time: 925ms):
Trace[829160891]: ---"About to write a response" 925ms (21:57:00.756)
Trace[829160891]: [925.898751ms] [925.898751ms] END
2021-07-06 21:57:13.230977 W | etcdserver: read-only range request "key:\"/registry/configmaps/fleet-system/gitjob\" " with result "range_response_count:1 size:529" took too long (228.134756ms) to execute
2021-07-06 21:57:13.617486 W | etcdserver: read-only range request "key:\"/registry/configmaps/fleet-system/fleet-controller-lock\" " with result "range_response_count:1 size:578" took too long (614.486448ms) to execute
2021-07-06 21:57:13.617625 W | etcdserver: read-only range request "key:\"/registry/daemonsets/\" range_end:\"/registry/daemonsets0\" count_only:true " with result "range_response_count:0 size:7" took too long (603.660535ms) to execute
2021-07-06 21:57:13.617775 W | etcdserver: read-only range request "key:\"/registry/configmaps/kube-system/cattle-controllers\" " with result "range_response_count:1 size:541" took too long (613.205021ms) to execute
2021-07-06 21:57:13.617808 W | etcdserver: read-only range request "key:\"/registry/services/endpoints/kube-system/cloud-controller-manager\" " with result "range_response_count:1 size:595" took too long (613.329831ms) to execute
I0706 21:57:13.618207 31 trace.go:205] Trace[98128158]: "Get" url:/api/v1/namespaces/kube-system/endpoints/cloud-controller-manager,user-agent:k3s/v1.19.8+k3s1 (linux/amd64) kubernetes/95fc76b/leader-election,client:127.0.0.1 (06-Jul-2021 21:57:13.004) (total time: 613ms):
Trace[98128158]: ---"About to write a response" 613ms (21:57:00.618)
Trace[98128158]: [613.929856ms] [613.929856ms] END
I0706 21:57:13.618339 31 trace.go:205] Trace[1186916079]: "Get" url:/api/v1/namespaces/kube-system/configmaps/cattle-controllers,user-agent:rancher/v0.0.0 (linux/amd64) kubernetes/$Format,client:127.0.0.1 (06-Jul-2021 21:57:13.003) (total time: 614ms):
Trace[1186916079]: ---"About to write a response" 614ms (21:57:00.618)
Trace[1186916079]: [614.831015ms] [614.831015ms] END
I0706 21:57:13.619224 31 trace.go:205] Trace[646770988]: "Get" url:/api/v1/namespaces/fleet-system/configmaps/fleet-controller-lock,user-agent:fleetcontroller/v0.0.0 (linux/amd64) kubernetes/$Format,client:10.42.0.12 (06-Jul-2021 21:57:13.002) (total time: 616ms):
Trace[646770988]: ---"About to write a response" 616ms (21:57:00.619)
Trace[646770988]: [616.449031ms] [616.449031ms] END
2021-07-06 21:57:14.048815 W | etcdserver: read-only range request "key:\"/registry/leases/kube-system/cloud-controller-manager\" " with result "range_response_count:1 size:498" took too long (425.371005ms) to execute
2021-07-06 21:57:14.700321 W | etcdserver: read-only range request "key:\"/registry/ranges/servicenodeports\" " with result "range_response_count:1 size:122" took too long (749.21515ms) to execute
2021-07-06 21:57:14.700491 W | etcdserver: read-only range request "key:\"/registry/configmaps/kube-system/rancher-controller-lock\" " with result "range_response_count:1 size:581" took too long (1.03047515s) to execute
I0706 21:57:14.701193 31 trace.go:205] Trace[1218106145]: "Get" url:/api/v1/namespaces/kube-system/configmaps/rancher-controller-lock,user-agent:rancher-operator/v0.0.0 (linux/amd64) kubernetes/$Format,client:10.42.0.10 (06-Jul-2021 21:57:13.669) (total time: 1031ms):
Trace[1218106145]: ---"About to write a response" 1031ms (21:57:00.701)
Trace[1218106145]: [1.031438539s] [1.031438539s] END
I0706 21:57:14.701908 31 trace.go:205] Trace[270376203]: "GuaranteedUpdate etcd3" type:*core.Endpoints (06-Jul-2021 21:57:14.051) (total time: 650ms):
Trace[270376203]: ---"Transaction committed" 650ms (21:57:00.701)
Trace[270376203]: [650.564341ms] [650.564341ms] END
I0706 21:57:14.702011 31 trace.go:205] Trace[360622555]: "Update" url:/api/v1/namespaces/kube-system/endpoints/cloud-controller-manager,user-agent:k3s/v1.19.8+k3s1 (linux/amd64) kubernetes/95fc76b/leader-election,client:127.0.0.1 (06-Jul-2021 21:57:14.051) (total time: 650ms):
Trace[360622555]: ---"Object stored in database" 650ms (21:57:00.701)
Trace[360622555]: [650.794032ms] [650.794032ms] END
2021-07-06 21:57:15.253070 W | etcdserver: read-only range request "key:\"/registry/ranges/serviceips\" " with result "range_response_count:1 size:7850" took too long (792.08136ms) to execute
2021-07-06 21:57:15.253332 W | etcdserver: read-only range request "key:\"/registry/services/endpoints/kube-system/kube-controller-manager\" " with result "range_response_count:1 size:593" took too long (1.15839124s) to execute
I0706 21:57:15.254014 31 trace.go:205] Trace[451303999]: "Get" url:/api/v1/namespaces/kube-system/endpoints/kube-controller-manager,user-agent:k3s/v1.19.8+k3s1 (linux/amd64) kubernetes/95fc76b/leader-election,client:127.0.0.1 (06-Jul-2021 21:57:14.094) (total time: 1159ms):
Trace[451303999]: ---"About to write a response" 1159ms (21:57:00.253)
Trace[451303999]: [1.159417806s] [1.159417806s] END
I0706 21:57:15.254145 31 trace.go:205] Trace[115673186]: "GuaranteedUpdate etcd3" type:*core.ConfigMap (06-Jul-2021 21:57:14.703) (total time: 550ms):
Trace[115673186]: ---"Transaction committed" 549ms (21:57:00.254)
Trace[115673186]: [550.372097ms] [550.372097ms] END
I0706 21:57:15.254280 31 trace.go:205] Trace[1068531275]: "Update" url:/api/v1/namespaces/kube-system/configmaps/rancher-controller-lock,user-agent:rancher-operator/v0.0.0 (linux/amd64) kubernetes/$Format,client:10.42.0.10 (06-Jul-2021 21:57:14.703) (total time: 550ms):
Trace[1068531275]: ---"Object stored in database" 550ms (21:57:00.254)
Trace[1068531275]: [550.664863ms] [550.664863ms] END
2021-07-06 21:57:15.777809 W | etcdserver: read-only range request "key:\"/registry/configmaps/fleet-system/fleet-agent-lock\" " with result "range_response_count:1 size:556" took too long (1.016375399s) to execute
2021-07-06 21:57:15.778914 W | etcdserver: read-only range request "key:\"/registry/configmaps/kube-system/k3s\" " with result "range_response_count:1 size:507" took too long (542.257573ms) to execute
2021-07-06 21:57:15.780145 W | etcdserver: read-only range request "key:\"/registry/namespaces/kube-system\" " with result "range_response_count:1 size:987" took too long (899.778486ms) to execute
2021-07-06 21:57:15.782899 W | etcdserver: read-only range request "key:\"/registry/services/endpoints/kube-system/kube-scheduler\" " with result "range_response_count:1 size:575" took too long (546.328055ms) to execute
2021-07-06 21:57:15.783770 W | etcdserver: read-only range request "key:\"/registry/services/specs/\" range_end:\"/registry/services/specs0\" " with result "range_response_count:4 size:4726" took too long (1.08017329s) to execute
2021-07-06 21:57:15.784568 W | etcdserver: read-only range request "key:\"/registry/leases/kube-system/cloud-controller-manager\" " with result "range_response_count:1 size:498" took too long (1.078603101s) to execute
2021-07-06 21:57:15.785697 W | etcdserver: read-only range request "key:\"/registry/services/specs/\" range_end:\"/registry/services/specs0\" " with result "range_response_count:4 size:4726" took too long (529.989587ms) to execute
2021-07-06 21:57:15.785742 W | etcdserver: read-only range request "key:\"/registry/configmaps/fleet-system/gitjob\" " with result "range_response_count:1 size:529" took too long (164.540287ms) to execute
2021-07-06 21:57:15.785928 W | etcdserver: read-only range request "key:\"/registry/leases/kube-system/kube-controller-manager\" " with result "range_response_count:1 size:497" took too long (529.324836ms) to execute
I0706 21:57:15.790293 31 trace.go:205] Trace[580248147]: "Get" url:/api/v1/namespaces/kube-system/endpoints/kube-scheduler,user-agent:k3s/v1.19.8+k3s1 (linux/amd64) kubernetes/95fc76b/leader-election,client:127.0.0.1 (06-Jul-2021 21:57:15.236) (total time: 554ms):
Trace[580248147]: ---"About to write a response" 553ms (21:57:00.790)
Trace[580248147]: [554.011169ms] [554.011169ms] END
I0706 21:57:15.790560 31 trace.go:205] Trace[2145752629]: "Get" url:/api/v1/namespaces/fleet-system/configmaps/fleet-agent-lock,user-agent:fleetagent/v0.0.0 (linux/amd64) kubernetes/$Format,client:10.42.0.14 (06-Jul-2021 21:57:14.761) (total time: 1029ms):
Trace[2145752629]: ---"About to write a response" 1029ms (21:57:00.790)
Trace[2145752629]: [1.029425276s] [1.029425276s] END
I0706 21:57:15.790848 31 trace.go:205] Trace[1137798453]: "Get" url:/api/v1/namespaces/kube-system/configmaps/k3s,user-agent:k3s/v1.19.8+k3s1 (linux/amd64) kubernetes/95fc76b,client:127.0.0.1 (06-Jul-2021 21:57:15.236) (total time: 554ms):
Trace[1137798453]: ---"About to write a response" 554ms (21:57:00.790)
Trace[1137798453]: [554.593932ms] [554.593932ms] END
I0706 21:57:15.791414 31 trace.go:205] Trace[1794602308]: "Get" url:/api/v1/namespaces/kube-system,user-agent:rancher/v0.0.0 (linux/amd64) kubernetes/$Format,client:127.0.0.1 (06-Jul-2021 21:57:14.879) (total time: 911ms):
Trace[1794602308]: ---"About to write a response" 911ms (21:57:00.791)
Trace[1794602308]: [911.384819ms] [911.384819ms] END
I0706 21:57:15.791822 31 trace.go:205] Trace[1119931937]: "Get" url:/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-controller-manager,user-agent:k3s/v1.19.8+k3s1 (linux/amd64) kubernetes/95fc76b/leader-election,client:127.0.0.1 (06-Jul-2021 21:57:15.256) (total time: 535ms):
Trace[1119931937]: ---"About to write a response" 535ms (21:57:00.791)
Trace[1119931937]: [535.481001ms] [535.481001ms] END
I0706 21:57:15.796292 31 trace.go:205] Trace[443522968]: "List etcd3" key:/services/specs,resourceVersion:,resourceVersionMatch:,limit:0,continue: (06-Jul-2021 21:57:15.254) (total time: 542ms):
Trace[443522968]: [542.14186ms] [542.14186ms] END
I0706 21:57:15.796661 31 trace.go:205] Trace[816138760]: "Get" url:/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/cloud-controller-manager,user-agent:k3s/v1.19.8+k3s1 (linux/amd64) kubernetes/95fc76b/leader-election,client:127.0.0.1 (06-Jul-2021 21:57:14.705) (total time: 1090ms):
Trace[816138760]: ---"About to write a response" 1090ms (21:57:00.796)
Trace[816138760]: [1.090981527s] [1.090981527s] END
I0706 21:57:15.797021 31 trace.go:205] Trace[4702159]: "List etcd3" key:/services/specs,resourceVersion:,resourceVersionMatch:,limit:0,continue: (06-Jul-2021 21:57:14.702) (total time: 1094ms):
Trace[4702159]: [1.094376584s] [1.094376584s] END
I0706 21:57:15.797457 31 trace.go:205] Trace[833075504]: "List" url:/api/v1/services,user-agent:k3s/v1.19.8+k3s1 (linux/amd64) kubernetes/95fc76b,client:127.0.0.1 (06-Jul-2021 21:57:15.254) (total time: 543ms):
Trace[833075504]: ---"Listing from storage done" 542ms (21:57:00.796)
Trace[833075504]: [543.324713ms] [543.324713ms] END
I0706 21:57:15.797620 31 trace.go:205] Trace[672767263]: "List" url:/api/v1/services,user-agent:k3s/v1.19.8+k3s1 (linux/amd64) kubernetes/95fc76b,client:127.0.0.1 (06-Jul-2021 21:57:14.702) (total time: 1094ms):
Trace[672767263]: ---"Listing from storage done" 1094ms (21:57:00.797)
Trace[672767263]: [1.094998555s] [1.094998555s] END
2021-07-06 21:57:16.228069 W | etcdserver: read-only range request "key:\"/registry/ranges/servicenodeports\" " with result "range_response_count:1 size:122" took too long (424.544918ms) to execute
2021-07-06 21:57:16.789645 W | etcdserver: read-only range request "key:\"/registry/configmaps/kube-system/cattle-controllers\" " with result "range_response_count:1 size:541" took too long (736.383339ms) to execute
2021-07-06 21:57:16.789820 W | etcdserver: read-only range request "key:\"/registry/configmaps/fleet-system/fleet-controller-lock\" " with result "range_response_count:1 size:578" took too long (737.791565ms) to execute
2021-07-06 21:57:16.789947 W | etcdserver: read-only range request "key:\"/registry/ranges/serviceips\" " with result "range_response_count:1 size:7850" took too long (983.672331ms) to execute
I0706 21:57:16.790667 31 trace.go:205] Trace[809874812]: "Get" url:/api/v1/namespaces/fleet-system/configmaps/fleet-controller-lock,user-agent:fleetcontroller/v0.0.0 (linux/amd64) kubernetes/$Format,client:10.42.0.12 (06-Jul-2021 21:57:16.051) (total time: 738ms):
Trace[809874812]: ---"About to write a response" 738ms (21:57:00.790)
Trace[809874812]: [738.939414ms] [738.939414ms] END
I0706 21:57:16.792400 31 trace.go:205] Trace[965334875]: "GuaranteedUpdate etcd3" type:*core.RangeAllocation (06-Jul-2021 21:57:15.802) (total time: 989ms):
Trace[965334875]: ---"initial value restored" 989ms (21:57:00.792)
Trace[965334875]: [989.682492ms] [989.682492ms] END
I0706 21:57:16.792449 31 trace.go:205] Trace[737583111]: "Get" url:/api/v1/namespaces/kube-system/configmaps/cattle-controllers,user-agent:rancher/v0.0.0 (linux/amd64) kubernetes/$Format,client:127.0.0.1 (06-Jul-2021 21:57:16.052) (total time: 739ms):
Trace[737583111]: ---"About to write a response" 739ms (21:57:00.792)
Trace[737583111]: [739.595799ms] [739.595799ms] END
2021-07-06 21:57:17.475373 W | etcdserver: read-only range request "key:\"/registry/leases/kube-system/kube-controller-manager\" " with result "range_response_count:1 size:497" took too long (1.242902204s) to execute
2021-07-06 21:57:17.475497 W | etcdserver: read-only range request "key:\"/registry/leases/kube-system/kube-scheduler\" " with result "range_response_count:1 size:479" took too long (1.243023697s) to execute
I0706 21:57:17.475992 31 trace.go:205] Trace[1332799388]: "GuaranteedUpdate etcd3" type:*core.ConfigMap (06-Jul-2021 21:57:16.795) (total time: 680ms):
Trace[1332799388]: ---"Transaction committed" 679ms (21:57:00.475)
Trace[1332799388]: [680.264813ms] [680.264813ms] END
I0706 21:57:17.476080 31 trace.go:205] Trace[971257173]: "Get" url:/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-scheduler,user-agent:k3s/v1.19.8+k3s1 (linux/amd64) kubernetes/95fc76b/leader-election,client:127.0.0.1 (06-Jul-2021 21:57:16.231) (total time: 1244ms):
Trace[971257173]: ---"About to write a response" 1243ms (21:57:00.475)
Trace[971257173]: [1.244073345s] [1.244073345s] END
I0706 21:57:17.476163 31 trace.go:205] Trace[1761668150]: "Update" url:/api/v1/namespaces/kube-system/configmaps/cattle-controllers,user-agent:rancher/v0.0.0 (linux/amd64) kubernetes/$Format,client:127.0.0.1 (06-Jul-2021 21:57:16.795) (total time: 680ms):
Trace[1761668150]: ---"Object stored in database" 680ms (21:57:00.476)
Trace[1761668150]: [680.599791ms] [680.599791ms] END
I0706 21:57:17.476601 31 trace.go:205] Trace[1205894607]: "Get" url:/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-controller-manager,user-agent:k3s/v1.19.8+k3s1 (linux/amd64) kubernetes/95fc76b/leader-election,client:127.0.0.1 (06-Jul-2021 21:57:16.232) (total time: 1244ms):
Trace[1205894607]: ---"About to write a response" 1244ms (21:57:00.476)
Trace[1205894607]: [1.244405543s] [1.244405543s] END
I0706 21:57:17.476947 31 trace.go:205] Trace[2069253048]: "GuaranteedUpdate etcd3" type:*core.ConfigMap (06-Jul-2021 21:57:16.793) (total time: 683ms):
Trace[2069253048]: ---"Transaction committed" 682ms (21:57:00.476)
Trace[2069253048]: [683.242654ms] [683.242654ms] END
I0706 21:57:17.477081 31 trace.go:205] Trace[1502309693]: "Update" url:/api/v1/namespaces/fleet-system/configmaps/fleet-controller-lock,user-agent:fleetcontroller/v0.0.0 (linux/amd64) kubernetes/$Format,client:10.42.0.12 (06-Jul-2021 21:57:16.793) (total time: 683ms):
Trace[1502309693]: ---"Object stored in database" 683ms (21:57:00.476)
Trace[1502309693]: [683.536351ms] [683.536351ms] END
2021-07-06 21:57:18.060510 W | etcdserver: read-only range request "key:\"/registry/configmaps/kube-system/rancher-controller-lock\" " with result "range_response_count:1 size:581" took too long (803.383109ms) to execute
I0706 21:57:18.061216 31 trace.go:205] Trace[659904076]: "GuaranteedUpdate etcd3" type:*coordination.Lease (06-Jul-2021 21:57:17.480) (total time: 581ms):
Trace[659904076]: ---"Transaction committed" 580ms (21:57:00.061)
Trace[659904076]: [581.11014ms] [581.11014ms] END
I0706 21:57:18.061327 31 trace.go:205] Trace[553576626]: "Get" url:/api/v1/namespaces/kube-system/configmaps/rancher-controller-lock,user-agent:rancher-operator/v0.0.0 (linux/amd64) kubernetes/$Format,client:10.42.0.10 (06-Jul-2021 21:57:17.256) (total time: 804ms):
Trace[553576626]: ---"About to write a response" 804ms (21:57:00.061)
Trace[553576626]: [804.592914ms] [804.592914ms] END
I0706 21:57:18.061338 31 trace.go:205] Trace[1175562118]: "Update" url:/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-scheduler,user-agent:k3s/v1.19.8+k3s1 (linux/amd64) kubernetes/95fc76b/leader-election,client:127.0.0.1 (06-Jul-2021 21:57:17.479) (total time: 581ms):
Trace[1175562118]: ---"Object stored in database" 581ms (21:57:00.061)
Trace[1175562118]: [581.371931ms] [581.371931ms] END
I0706 21:57:18.061404 31 trace.go:205] Trace[621472487]: "GuaranteedUpdate etcd3" type:*coordination.Lease (06-Jul-2021 21:57:17.480) (total time: 581ms):
Trace[621472487]: ---"Transaction committed" 580ms (21:57:00.061)
Trace[621472487]: [581.282243ms] [581.282243ms] END
I0706 21:57:18.061497 31 trace.go:205] Trace[1675992331]: "Update" url:/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-controller-manager,user-agent:k3s/v1.19.8+k3s1 (linux/amd64) kubernetes/95fc76b/leader-election,client:127.0.0.1 (06-Jul-2021 21:57:17.479) (total time: 581ms):
Trace[1675992331]: ---"Object stored in database" 581ms (21:57:00.061)
Trace[1675992331]: [581.482649ms] [581.482649ms] END
2021-07-06 21:57:18.547063 W | etcdserver: read-only range request "key:\"/registry/rancher.cattle.io/roletemplatebindings/\" range_end:\"/registry/rancher.cattle.io/roletemplatebindings0\" count_only:true " with result "range_response_count:0 size:7" took too long (578.100784ms) to execute
2021-07-06 21:57:18.547160 W | etcdserver: read-only range request "key:\"/registry/configmaps/fleet-system/gitjob\" " with result "range_response_count:1 size:529" took too long (742.132297ms) to execute
2021-07-06 21:57:18.547429 W | etcdserver: read-only range request "key:\"/registry/namespaces/default\" " with result "range_response_count:1 size:979" took too long (565.201837ms) to execute
I0706 21:57:18.547667 31 trace.go:205] Trace[520940757]: "Get" url:/api/v1/namespaces/fleet-system/configmaps/gitjob,user-agent:gitjob/v0.0.0 (linux/amd64) kubernetes/$Format,client:10.42.0.13 (06-Jul-2021 21:57:17.804) (total time: 743ms):
Trace[520940757]: ---"About to write a response" 742ms (21:57:00.547)
Trace[520940757]: [743.026164ms] [743.026164ms] END
I0706 21:57:18.550575 31 trace.go:205] Trace[982337603]: "Get" url:/api/v1/namespaces/default,user-agent:k3s/v1.19.8+k3s1 (linux/amd64) kubernetes/95fc76b,client:127.0.0.1 (06-Jul-2021 21:57:17.981) (total time: 568ms):
Trace[982337603]: ---"About to write a response" 568ms (21:57:00.550)
Trace[982337603]: [568.591444ms] [568.591444ms] END
2021-07-06 21:57:19.044274 W | etcdserver: read-only range request "key:\"/registry/services/endpoints/kube-system/cloud-controller-manager\" " with result "range_response_count:1 size:595" took too long (811.722904ms) to execute
2021-07-06 21:57:19.044405 W | etcdserver: read-only range request "key:\"/registry/configmaps/kube-system/k3s\" " with result "range_response_count:1 size:507" took too long (813.290564ms) to execute
2021-07-06 21:57:19.044456 W | etcdserver: read-only range request "key:\"/registry/configmaps/fleet-system/fleet-agent-lock\" " with result "range_response_count:1 size:556" took too long (813.362307ms) to execute
I0706 21:57:19.044948 31 trace.go:205] Trace[152768182]: "Get" url:/api/v1/namespaces/fleet-system/configmaps/fleet-agent-lock,user-agent:fleetagent/v0.0.0 (linux/amd64) kubernetes/$Format,client:10.42.0.14 (06-Jul-2021 21:57:18.230) (total time: 814ms):
Trace[152768182]: ---"About to write a response" 814ms (21:57:00.044)
Trace[152768182]: [814.164753ms] [814.164753ms] END
I0706 21:57:19.045378 31 trace.go:205] Trace[536423852]: "Get" url:/api/v1/namespaces/kube-system/endpoints/cloud-controller-manager,user-agent:k3s/v1.19.8+k3s1 (linux/amd64) kubernetes/95fc76b/leader-election,client:127.0.0.1 (06-Jul-2021 21:57:18.232) (total time: 813ms):
Trace[536423852]: ---"About to write a response" 813ms (21:57:00.045)
Trace[536423852]: [813.108292ms] [813.108292ms] END
I0706 21:57:19.045748 31 trace.go:205] Trace[1431792284]: "Get" url:/api/v1/namespaces/kube-system/configmaps/k3s,user-agent:k3s/v1.19.8+k3s1 (linux/amd64) kubernetes/95fc76b,client:127.0.0.1 (06-Jul-2021 21:57:18.230) (total time: 814ms):
Trace[1431792284]: ---"About to write a response" 814ms (21:57:00.045)
Trace[1431792284]: [814.977092ms] [814.977092ms] END
2021-07-06 21:57:19.585930 W | etcdserver: read-only range request "key:\"/registry/jobs/\" range_end:\"/registry/jobs0\" limit:500 " with result "range_response_count:0 size:7" took too long (972.940861ms) to execute
2021-07-06 21:57:19.586141 W | etcdserver: read-only range request "key:\"/registry/services/specs/default/kubernetes\" " with result "range_response_count:1 size:601" took too long (1.034354226s) to execute
I0706 21:57:19.586551 31 trace.go:205] Trace[786113979]: "List etcd3" key:/jobs,resourceVersion:,resourceVersionMatch:,limit:500,continue: (06-Jul-2021 21:57:18.612) (total time: 973ms):
Trace[786113979]: [973.925354ms] [973.925354ms] END
I0706 21:57:19.586663 31 trace.go:205] Trace[827753277]: "Get" url:/api/v1/namespaces/default/services/kubernetes,user-agent:k3s/v1.19.8+k3s1 (linux/amd64) kubernetes/95fc76b,client:127.0.0.1 (06-Jul-2021 21:57:18.551) (total time: 1035ms):
Trace[827753277]: ---"About to write a response" 1034ms (21:57:00.586)
Trace[827753277]: [1.035057861s] [1.035057861s] END
I0706 21:57:19.586664 31 trace.go:205] Trace[390580393]: "List" url:/apis/batch/v1/jobs,user-agent:k3s/v1.19.8+k3s1 (linux/amd64) kubernetes/95fc76b/system:serviceaccount:kube-system:cronjob-controller,client:127.0.0.1 (06-Jul-2021 21:57:18.612) (total time: 974ms):
Trace[390580393]: ---"Listing from storage done" 974ms (21:57:00.586)
Trace[390580393]: [974.086117ms] [974.086117ms] END
I0706 21:57:19.586941 31 trace.go:205] Trace[762916313]: "GuaranteedUpdate etcd3" type:*core.ConfigMap (06-Jul-2021 21:57:19.048) (total time: 538ms):
Trace[762916313]: ---"Transaction committed" 537ms (21:57:00.586)
Trace[762916313]: [538.403062ms] [538.403062ms] END
I0706 21:57:19.587101 31 trace.go:205] Trace[74506283]: "Update" url:/api/v1/namespaces/fleet-system/configmaps/fleet-agent-lock,user-agent:fleetagent/v0.0.0 (linux/amd64) kubernetes/$Format,client:10.42.0.14 (06-Jul-2021 21:57:19.048) (total time: 538ms):
Trace[74506283]: ---"Object stored in database" 538ms (21:57:00.586)
Trace[74506283]: [538.695017ms] [538.695017ms] END
I0706 21:57:19.587657 31 trace.go:205] Trace[1280619352]: "GuaranteedUpdate etcd3" type:*core.ConfigMap (06-Jul-2021 21:57:19.049) (total time: 537ms):
Trace[1280619352]: ---"Transaction committed" 537ms (21:57:00.587)
Trace[1280619352]: [537.913934ms] [537.913934ms] END
I0706 21:57:19.587778 31 trace.go:205] Trace[1349984377]: "Update" url:/api/v1/namespaces/kube-system/configmaps/k3s,user-agent:k3s/v1.19.8+k3s1 (linux/amd64) kubernetes/95fc76b,client:127.0.0.1 (06-Jul-2021 21:57:19.049) (total time: 538ms):
Trace[1349984377]: ---"Object stored in database" 537ms (21:57:00.587)
Trace[1349984377]: [538.172395ms] [538.172395ms] END
2021-07-06 21:57:20.171662 W | etcdserver: read-only range request "key:\"/registry/configmaps/fleet-system/fleet-controller-lock\" " with result "range_response_count:1 size:578" took too long (689.878761ms) to execute
2021-07-06 21:57:20.171702 W | etcdserver: read-only range request "key:\"/registry/services/endpoints/kube-system/kube-scheduler\" " with result "range_response_count:1 size:575" took too long (107.224203ms) to execute
2021-07-06 21:57:20.171835 W | etcdserver: read-only range request "key:\"/registry/cronjobs/\" range_end:\"/registry/cronjobs0\" limit:500 " with result "range_response_count:0 size:7" took too long (582.824183ms) to execute
2021-07-06 21:57:20.171945 W | etcdserver: read-only range request "key:\"/registry/masterleases/172.17.0.2\" " with result "range_response_count:1 size:135" took too long (582.849697ms) to execute
2021-07-06 21:57:20.171984 W | etcdserver: read-only range request "key:\"/registry/configmaps/kube-system/cattle-controllers\" " with result "range_response_count:1 size:541" took too long (690.908295ms) to execute
2021-07-06 21:57:20.172089 W | etcdserver: read-only range request "key:\"/registry/services/endpoints/kube-system/kube-controller-manager\" " with result "range_response_count:1 size:593" took too long (107.54985ms) to execute
I0706 21:57:20.172388 31 trace.go:205] Trace[1674728935]: "List etcd3" key:/cronjobs,resourceVersion:,resourceVersionMatch:,limit:500,continue: (06-Jul-2021 21:57:19.588) (total time: 584ms):
Trace[1674728935]: [584.058001ms] [584.058001ms] END
I0706 21:57:20.172517 31 trace.go:205] Trace[707666264]: "List" url:/apis/batch/v1beta1/cronjobs,user-agent:k3s/v1.19.8+k3s1 (linux/amd64) kubernetes/95fc76b/system:serviceaccount:kube-system:cronjob-controller,client:127.0.0.1 (06-Jul-2021 21:57:19.588) (total time: 584ms):
Trace[707666264]: ---"Listing from storage done" 584ms (21:57:00.172)
Trace[707666264]: [584.228551ms] [584.228551ms] END
I0706 21:57:20.172599 31 trace.go:205] Trace[1395236943]: "Get" url:/api/v1/namespaces/kube-system/configmaps/cattle-controllers,user-agent:rancher/v0.0.0 (linux/amd64) kubernetes/$Format,client:127.0.0.1 (06-Jul-2021 21:57:19.480) (total time: 691ms):
Trace[1395236943]: ---"About to write a response" 691ms (21:57:00.172)
Trace[1395236943]: [691.834438ms] [691.834438ms] END
I0706 21:57:20.172691 31 trace.go:205] Trace[297402645]: "Get" url:/api/v1/namespaces/fleet-system/configmaps/fleet-controller-lock,user-agent:fleetcontroller/v0.0.0 (linux/amd64) kubernetes/$Format,client:10.42.0.12 (06-Jul-2021 21:57:19.481) (total time: 691ms):
Trace[297402645]: ---"About to write a response" 691ms (21:57:00.172)
Trace[297402645]: [691.197252ms] [691.197252ms] END
2021-07-06 21:57:20.173474 W | etcdserver: read-only range request "key:\"/registry/leases/kube-system/cloud-controller-manager\" " with result "range_response_count:1 size:498" took too long (1.123855596s) to execute
I0706 21:57:20.173938 31 trace.go:205] Trace[1515451410]: "Get" url:/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/cloud-controller-manager,user-agent:k3s/v1.19.8+k3s1 (linux/amd64) kubernetes/95fc76b/leader-election,client:127.0.0.1 (06-Jul-2021 21:57:19.048) (total time: 1125ms):
Trace[1515451410]: ---"About to write a response" 1125ms (21:57:00.173)
Trace[1515451410]: [1.125447479s] [1.125447479s] END
2021-07-06 21:57:20.658862 W | etcdserver: read-only range request "key:\"/registry/leases/kube-system/kube-scheduler\" " with result "range_response_count:1 size:478" took too long (481.268877ms) to execute
I0706 21:57:20.660410 31 trace.go:205] Trace[672138864]: "GuaranteedUpdate etcd3" type:*v1.Endpoints (06-Jul-2021 21:57:19.587) (total time: 1072ms):
Trace[672138864]: ---"initial value restored" 584ms (21:57:00.172)
Trace[672138864]: ---"Transaction committed" 485ms (21:57:00.660)
Trace[672138864]: [1.072939552s] [1.072939552s] END
2021-07-06 21:57:21.166329 W | etcdserver: read-only range request "key:\"/registry/configmaps/kube-system/rancher-controller-lock\" " with result "range_response_count:1 size:581" took too long (614.649097ms) to execute
2021-07-06 21:57:21.166571 W | etcdserver: read-only range request "key:\"/registry/csinodes/\" range_end:\"/registry/csinodes0\" count_only:true " with result "range_response_count:0 size:9" took too long (772.856833ms) to execute
2021-07-06 21:57:21.166642 W | etcdserver: read-only range request "key:\"/registry/leases/kube-system/kube-controller-manager\" " with result "range_response_count:1 size:496" took too long (988.986631ms) to execute
I0706 21:57:21.167110 31 trace.go:205] Trace[793690126]: "Get" url:/api/v1/namespaces/kube-system/configmaps/rancher-controller-lock,user-agent:rancher-operator/v0.0.0 (linux/amd64) kubernetes/$Format,client:10.42.0.10 (06-Jul-2021 21:57:20.551) (total time: 615ms):
Trace[793690126]: ---"About to write a response" 615ms (21:57:00.166)
Trace[793690126]: [615.759159ms] [615.759159ms] END
I0706 21:57:21.167146 31 trace.go:205] Trace[1082882697]: "Get" url:/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-controller-manager,user-agent:k3s/v1.19.8+k3s1 (linux/amd64) kubernetes/95fc76b/leader-election,client:127.0.0.1 (06-Jul-2021 21:57:20.176) (total time: 990ms):
Trace[1082882697]: ---"About to write a response" 990ms (21:57:00.167)
Trace[1082882697]: [990.219747ms] [990.219747ms] END
I0706 21:57:21.167274 31 trace.go:205] Trace[359090481]: "GuaranteedUpdate etcd3" type:*core.Endpoints (06-Jul-2021 21:57:20.661) (total time: 505ms):
Trace[359090481]: ---"Transaction committed" 504ms (21:57:00.167)
Trace[359090481]: [505.369495ms] [505.369495ms] END
I0706 21:57:21.167372 31 trace.go:205] Trace[1362480037]: "Update" url:/api/v1/namespaces/kube-system/endpoints/kube-scheduler,user-agent:k3s/v1.19.8+k3s1 (linux/amd64) kubernetes/95fc76b/leader-election,client:127.0.0.1 (06-Jul-2021 21:57:20.661) (total time: 505ms):
Trace[1362480037]: ---"Object stored in database" 505ms (21:57:00.167)
Trace[1362480037]: [505.571999ms] [505.571999ms] END
2021-07-06 21:57:21.796577 W | etcdserver: read-only range request "key:\"/registry/management.cattle.io/dynamicschemas/\" range_end:\"/registry/management.cattle.io/dynamicschemas0\" count_only:true " with result "range_response_count:0 size:9" took too long (670.068507ms) to execute
2021-07-06 21:57:21.796677 W | etcdserver: read-only range request "key:\"/registry/management.cattle.io/globaldnsproviders/\" range_end:\"/registry/management.cattle.io/globaldnsproviders0\" count_only:true " with result "range_response_count:0 size:7" took too long (712.426929ms) to execute
2021-07-06 21:57:21.796758 W | etcdserver: read-only range request "key:\"/registry/services/endpoints/default/kubernetes\" " with result "range_response_count:1 size:409" took too long (1.133638517s) to execute
2021-07-06 21:57:21.796864 W | etcdserver: read-only range request "key:\"/registry/configmaps/fleet-system/gitjob\" " with result "range_response_count:1 size:529" took too long (747.640418ms) to execute
2021-07-06 21:57:21.796888 W | etcdserver: read-only range request "key:\"/registry/leases/kube-system/cloud-controller-manager\" " with result "range_response_count:1 size:498" took too long (1.133881653s) to execute
I0706 21:57:21.797465 31 trace.go:205] Trace[1146613115]: "Get" url:/api/v1/namespaces/fleet-system/configmaps/gitjob,user-agent:gitjob/v0.0.0 (linux/amd64) kubernetes/$Format,client:10.42.0.13 (06-Jul-2021 21:57:21.048) (total time: 748ms):
Trace[1146613115]: ---"About to write a response" 748ms (21:57:00.797)
Trace[1146613115]: [748.835851ms] [748.835851ms] END
I0706 21:57:21.797465 31 trace.go:205] Trace[1007837260]: "Get" url:/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/cloud-controller-manager,user-agent:k3s/v1.19.8+k3s1 (linux/amd64) kubernetes/95fc76b/leader-election,client:127.0.0.1 (06-Jul-2021 21:57:20.662) (total time: 1135ms):
Trace[1007837260]: ---"About to write a response" 1134ms (21:57:00.797)
Trace[1007837260]: [1.13501578s] [1.13501578s] END
I0706 21:57:21.798002 31 trace.go:205] Trace[8559836]: "Get" url:/api/v1/namespaces/default/endpoints/kubernetes,user-agent:k3s/v1.19.8+k3s1 (linux/amd64) kubernetes/95fc76b,client:127.0.0.1 (06-Jul-2021 21:57:20.660) (total time: 1136ms):
Trace[8559836]: ---"About to write a response" 1136ms (21:57:00.797)
Trace[8559836]: [1.136977317s] [1.136977317s] END
I0706 21:57:21.798247 31 trace.go:205] Trace[373588853]: "GuaranteedUpdate etcd3" type:*core.Endpoints (06-Jul-2021 21:57:21.170) (total time: 627ms):
Trace[373588853]: ---"Transaction committed" 627ms (21:57:00.798)
Trace[373588853]: [627.432465ms] [627.432465ms] END
I0706 21:57:21.798348 31 trace.go:205] Trace[1423966534]: "Update" url:/api/v1/namespaces/kube-system/endpoints/kube-controller-manager,user-agent:k3s/v1.19.8+k3s1 (linux/amd64) kubernetes/95fc76b/leader-election,client:127.0.0.1 (06-Jul-2021 21:57:21.170) (total time: 627ms):
Trace[1423966534]: ---"Object stored in database" 627ms (21:57:00.798)
Trace[1423966534]: [627.633313ms] [627.633313ms] END
I0706 21:57:21.798555 31 trace.go:205] Trace[364336730]: "GuaranteedUpdate etcd3" type:*core.ConfigMap (06-Jul-2021 21:57:21.169) (total time: 628ms):
Trace[364336730]: ---"Transaction committed" 628ms (21:57:00.798)
Trace[364336730]: [628.837959ms] [628.837959ms] END
I0706 21:57:21.798664 31 trace.go:205] Trace[1841980546]: "Update" url:/api/v1/namespaces/kube-system/configmaps/rancher-controller-lock,user-agent:rancher-operator/v0.0.0 (linux/amd64) kubernetes/$Format,client:10.42.0.10 (06-Jul-2021 21:57:21.169) (total time: 629ms):
Trace[1841980546]: ---"Object stored in database" 628ms (21:57:00.798)
Trace[1841980546]: [629.108536ms] [629.108536ms] END
2021-07-06 21:57:22.592839 W | etcdserver: read-only range request "key:\"/registry/configmaps/fleet-system/fleet-agent-lock\" " with result "range_response_count:1 size:556" took too long (1.002586373s) to execute
2021-07-06 21:57:22.592935 W | etcdserver: read-only range request "key:\"/registry/leases/kube-system/kube-scheduler\" " with result "range_response_count:1 size:478" took too long (1.422458717s) to execute
2021-07-06 21:57:22.593014 W | etcdserver: read-only range request "key:\"/registry/configmaps/kube-system/k3s\" " with result "range_response_count:1 size:507" took too long (1.002744295s) to execute
2021-07-06 21:57:22.593072 W | etcdserver: read-only range request "key:\"/registry/volumeattachments/\" range_end:\"/registry/volumeattachments0\" count_only:true " with result "range_response_count:0 size:7" took too long (1.366510731s) to execute
I0706 21:57:22.593629 31 trace.go:205] Trace[1892141726]: "Get" url:/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-scheduler,user-agent:k3s/v1.19.8+k3s1 (linux/amd64) kubernetes/95fc76b/leader-election,client:127.0.0.1 (06-Jul-2021 21:57:21.170) (total time: 1423ms):
Trace[1892141726]: ---"About to write a response" 1423ms (21:57:00.593)
Trace[1892141726]: [1.423563603s] [1.423563603s] END
I0706 21:57:22.593986 31 trace.go:205] Trace[408221151]: "Get" url:/api/v1/namespaces/fleet-system/configmaps/fleet-agent-lock,user-agent:fleetagent/v0.0.0 (linux/amd64) kubernetes/$Format,client:10.42.0.14 (06-Jul-2021 21:57:21.589) (total time: 1003ms):
Trace[408221151]: ---"About to write a response" 1003ms (21:57:00.593)
Trace[408221151]: [1.003967233s] [1.003967233s] END
I0706 21:57:22.594331 31 trace.go:205] Trace[584970609]: "Get" url:/api/v1/namespaces/kube-system/configmaps/k3s,user-agent:k3s/v1.19.8+k3s1 (linux/amd64) kubernetes/95fc76b,client:127.0.0.1 (06-Jul-2021 21:57:21.589) (total time: 1004ms):
Trace[584970609]: ---"About to write a response" 1004ms (21:57:00.594)
Trace[584970609]: [1.004484802s] [1.004484802s] END
I0706 21:57:22.595112 31 trace.go:205] Trace[193793347]: "GuaranteedUpdate etcd3" type:*core.ConfigMap (06-Jul-2021 21:57:21.799) (total time: 795ms):
Trace[193793347]: ---"Transaction committed" 794ms (21:57:00.595)
Trace[193793347]: [795.517559ms] [795.517559ms] END
I0706 21:57:22.595232 31 trace.go:205] Trace[2098339872]: "Update" url:/api/v1/namespaces/fleet-system/configmaps/gitjob,user-agent:gitjob/v0.0.0 (linux/amd64) kubernetes/$Format,client:10.42.0.13 (06-Jul-2021 21:57:21.799) (total time: 795ms):
Trace[2098339872]: ---"Object stored in database" 795ms (21:57:00.595)
Trace[2098339872]: [795.766368ms] [795.766368ms] END
I0706 21:57:22.595709 31 trace.go:205] Trace[788672972]: "GuaranteedUpdate etcd3" type:*coordination.Lease (06-Jul-2021 21:57:21.801) (total time: 794ms):
Trace[788672972]: ---"Transaction committed" 794ms (21:57:00.595)
Trace[788672972]: [794.476559ms] [794.476559ms] END
I0706 21:57:22.595801 31 trace.go:205] Trace[340624266]: "Update" url:/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/cloud-controller-manager,user-agent:k3s/v1.19.8+k3s1 (linux/amd64) kubernetes/95fc76b/leader-election,client:127.0.0.1 (06-Jul-2021 21:57:21.801) (total time: 794ms):
Trace[340624266]: ---"Object stored in database" 794ms (21:57:00.595)
Trace[340624266]: [794.642811ms] [794.642811ms] END
2021-07-06 21:57:23.012624 W | etcdserver: read-only range request "key:\"/registry/namespaces/\" range_end:\"/registry/namespaces0\" " with result "range_response_count:24 size:26202" took too long (1.057032465s) to execute
2021-07-06 21:57:23.012907 W | etcdserver: read-only range request "key:\"/registry/podsecuritypolicy/\" range_end:\"/registry/podsecuritypolicy0\" count_only:true " with result "range_response_count:0 size:7" took too long (1.196308065s) to execute
2021-07-06 21:57:23.013106 W | etcdserver: read-only range request "key:\"/registry/leases/kube-system/kube-controller-manager\" " with result "range_response_count:1 size:496" took too long (1.210617156s) to execute
2021-07-06 21:57:23.013250 W | etcdserver: read-only range request "key:\"/registry/masterleases/\" range_end:\"/registry/masterleases0\" " with result "range_response_count:1 size:135" took too long (1.212047476s) to execute
I0706 21:57:23.014124 31 trace.go:205] Trace[324573018]: "List etcd3" key:/masterleases/,resourceVersion:0,resourceVersionMatch:NotOlderThan,limit:0,continue: (06-Jul-2021 21:57:21.800) (total time: 1213ms):
Trace[324573018]: [1.213361522s] [1.213361522s] END
I0706 21:57:23.014697 31 trace.go:205] Trace[1424825782]: "Get" url:/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-controller-manager,user-agent:k3s/v1.19.8+k3s1 (linux/amd64) kubernetes/95fc76b/leader-election,client:127.0.0.1 (06-Jul-2021 21:57:21.802) (total time: 1212ms):
Trace[1424825782]: ---"About to write a response" 1212ms (21:57:00.014)
Trace[1424825782]: [1.212604157s] [1.212604157s] END
I0706 21:57:23.015588 31 trace.go:205] Trace[1368899352]: "List etcd3" key:/namespaces,resourceVersion:,resourceVersionMatch:,limit:0,continue: (06-Jul-2021 21:57:21.955) (total time: 1060ms):
Trace[1368899352]: [1.060419825s] [1.060419825s] END
I0706 21:57:23.016815 31 trace.go:205] Trace[1165468379]: "List" url:/api/v1/namespaces,user-agent:fleetcontroller/v0.0.0 (linux/amd64) kubernetes/$Format,client:10.42.0.12 (06-Jul-2021 21:57:21.955) (total time: 1061ms):
Trace[1165468379]: ---"Listing from storage done" 1060ms (21:57:00.015)
Trace[1165468379]: [1.061658606s] [1.061658606s] END
2021-07-06 21:57:23.538439 W | etcdserver: read-only range request "key:\"/registry/clusterroles/\" range_end:\"/registry/clusterroles0\" count_only:true " with result "range_response_count:0 size:9" took too long (569.325562ms) to execute
2021-07-06 21:57:23.538626 W | etcdserver: read-only range request "key:\"/registry/management.cattle.io/projects/\" range_end:\"/registry/management.cattle.io/projects0\" count_only:true " with result "range_response_count:0 size:9" took too long (758.333401ms) to execute
2021-07-06 21:57:23.538768 W | etcdserver: read-only range request "key:\"/registry/configmaps/kube-system/cattle-controllers\" " with result "range_response_count:1 size:541" took too long (873.322493ms) to execute
2021-07-06 21:57:23.538897 W | etcdserver: read-only range request "key:\"/registry/configmaps/fleet-system/fleet-controller-lock\" " with result "range_response_count:1 size:578" took too long (873.831691ms) to execute
I0706 21:57:23.539481 31 trace.go:205] Trace[601909743]: "Get" url:/api/v1/namespaces/fleet-system/configmaps/fleet-controller-lock,user-agent:fleetcontroller/v0.0.0 (linux/amd64) kubernetes/$Format,client:10.42.0.12 (06-Jul-2021 21:57:22.664) (total time: 874ms):
Trace[601909743]: ---"About to write a response" 874ms (21:57:00.539)
Trace[601909743]: [874.731181ms] [874.731181ms] END
I0706 21:57:23.539844 31 trace.go:205] Trace[1076293308]: "GuaranteedUpdate etcd3" type:*coordination.Lease (06-Jul-2021 21:57:23.018) (total time: 521ms):
Trace[1076293308]: ---"Transaction committed" 520ms (21:57:00.539)
Trace[1076293308]: [521.284095ms] [521.284095ms] END
I0706 21:57:23.539943 31 trace.go:205] Trace[645202243]: "Update" url:/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-controller-manager,user-agent:k3s/v1.19.8+k3s1 (linux/amd64) kubernetes/95fc76b/leader-election,client:127.0.0.1 (06-Jul-2021 21:57:23.018) (total time: 521ms):
Trace[645202243]: ---"Object stored in database" 521ms (21:57:00.539)
Trace[645202243]: [521.491444ms] [521.491444ms] END
I0706 21:57:23.540285 31 trace.go:205] Trace[1652201694]: "Get" url:/api/v1/namespaces/kube-system/configmaps/cattle-controllers,user-agent:rancher/v0.0.0 (linux/amd64) kubernetes/$Format,client:127.0.0.1 (06-Jul-2021 21:57:22.665) (total time: 875ms):
Trace[1652201694]: ---"About to write a response" 875ms (21:57:00.540)
Trace[1652201694]: [875.082017ms] [875.082017ms] END
2021-07-06 21:57:23.996322 W | etcdserver: read-only range request "key:\"/registry/fleet.cattle.io/bundledeployments/c-mt7kb/\" range_end:\"/registry/fleet.cattle.io/bundledeployments/c-mt7kb0\" " with result "range_response_count:0 size:7" took too long (975.875636ms) to execute
I0706 21:57:23.996909 31 trace.go:205] Trace[765436167]: "List etcd3" key:/fleet.cattle.io/bundledeployments/c-mt7kb,resourceVersion:,resourceVersionMatch:,limit:0,continue: (06-Jul-2021 21:57:23.020) (total time: 976ms):
Trace[765436167]: [976.749497ms] [976.749497ms] END
I0706 21:57:23.997120 31 trace.go:205] Trace[1894095975]: "List" url:/apis/fleet.cattle.io/v1alpha1/namespaces/c-mt7kb/bundledeployments,user-agent:fleetcontroller/v0.0.0 (linux/amd64) kubernetes/$Format,client:10.42.0.12 (06-Jul-2021 21:57:23.020) (total time: 977ms):
Trace[1894095975]: ---"Listing from storage done" 976ms (21:57:00.996)
Trace[1894095975]: [977.019164ms] [977.019164ms] END
2021-07-06 21:57:24.000546 W | etcdserver: read-only range request "key:\"/registry/endpointslices/default/kubernetes\" " with result "range_response_count:1 size:473" took too long (983.192041ms) to execute
I0706 21:57:24.000882 31 trace.go:205] Trace[501719454]: "Get" url:/apis/discovery.k8s.io/v1beta1/namespaces/default/endpointslices/kubernetes,user-agent:k3s/v1.19.8+k3s1 (linux/amd64) kubernetes/95fc76b,client:127.0.0.1 (06-Jul-2021 21:57:23.016) (total time: 984ms):
Trace[501719454]: ---"About to write a response" 984ms (21:57:00.000)
Trace[501719454]: [984.170266ms] [984.170266ms] END
2021-07-06 21:57:24.504833 W | etcdserver: read-only range request "key:\"/registry/fleet.cattle.io/bundledeployments/cattle-global-data/\" range_end:\"/registry/fleet.cattle.io/bundledeployments/cattle-global-data0\" " with result "range_response_count:0 size:7" took too long (504.605669ms) to execute
2021-07-06 21:57:24.504957 W | etcdserver: read-only range request "key:\"/registry/management.cattle.io/roletemplates/\" range_end:\"/registry/management.cattle.io/roletemplates0\" count_only:true " with result "range_response_count:0 size:9" took too long (715.937144ms) to execute
2021-07-06 21:57:24.505014 W | etcdserver: read-only range request "key:\"/registry/certificatesigningrequests/\" range_end:\"/registry/certificatesigningrequests0\" count_only:true " with result "range_response_count:0 size:7" took too long (871.543195ms) to execute
2021-07-06 21:57:24.505167 W | etcdserver: read-only range request "key:\"/registry/management.cattle.io/tokens/\" range_end:\"/registry/management.cattle.io/tokens0\" count_only:true " with result "range_response_count:0 size:9" took too long (268.552945ms) to execute
2021-07-06 21:57:24.505219 W | etcdserver: read-only range request "key:\"/registry/priorityclasses/\" range_end:\"/registry/priorityclasses0\" count_only:true " with result "range_response_count:0 size:9" took too long (775.258101ms) to execute
2021-07-06 21:57:24.505266 W | etcdserver: read-only range request "key:\"/registry/fleet.cattle.io/bundles/\" range_end:\"/registry/fleet.cattle.io/bundles0\" count_only:true " with result "range_response_count:0 size:9" took too long (590.355717ms) to execute
2021-07-06 21:57:24.505479 W | etcdserver: read-only range request "key:\"/registry/configmaps/kube-system/rancher-controller-lock\" " with result "range_response_count:1 size:581" took too long (704.270251ms) to execute
I0706 21:57:24.505692 31 trace.go:205] Trace[729407900]: "List etcd3" key:/fleet.cattle.io/bundledeployments/cattle-global-data,resourceVersion:,resourceVersionMatch:,limit:0,continue: (06-Jul-2021 21:57:23.998) (total time: 506ms):
Trace[729407900]: [506.743261ms] [506.743261ms] END
I0706 21:57:24.505882 31 trace.go:205] Trace[910522536]: "List" url:/apis/fleet.cattle.io/v1alpha1/namespaces/cattle-global-data/bundledeployments,user-agent:fleetcontroller/v0.0.0 (linux/amd64) kubernetes/$Format,client:10.42.0.12 (06-Jul-2021 21:57:23.998) (total time: 506ms):
Trace[910522536]: ---"Listing from storage done" 506ms (21:57:00.505)
Trace[910522536]: [506.988868ms] [506.988868ms] END
I0706 21:57:24.506062 31 trace.go:205] Trace[189251580]: "Get" url:/api/v1/namespaces/kube-system/configmaps/rancher-controller-lock,user-agent:rancher-operator/v0.0.0 (linux/amd64) kubernetes/$Format,client:10.42.0.10 (06-Jul-2021 21:57:23.800) (total time: 705ms):
Trace[189251580]: ---"About to write a response" 704ms (21:57:00.505)
Trace[189251580]: [705.080669ms] [705.080669ms] END
2021/07/06 21:57:24 [ERROR] error syncing 'cattle-logging/rancher-logging-fluentd-linux': handler workloadServiceGenerationController: cannot find app namespace in labels of cattle-logging, requeuing
2021/07/06 21:57:24 [ERROR] error syncing 'cattle-logging/rancher-logging-log-aggregator-linux': handler workloadServiceGenerationController: cannot find app namespace in labels of cattle-logging, requeuing
2021-07-06 21:57:25.058339 W | etcdserver: read-only range request "key:\"/registry/fleet.cattle.io/bundledeployments/cattle-system/\" range_end:\"/registry/fleet.cattle.io/bundledeployments/cattle-system0\" " with result "range_response_count:0 size:7" took too long (548.18545ms) to execute
I0706 21:57:25.059308 31 trace.go:205] Trace[424076345]: "List etcd3" key:/fleet.cattle.io/bundledeployments/cattle-system,resourceVersion:,resourceVersionMatch:,limit:0,continue: (06-Jul-2021 21:57:24.509) (total time: 549ms):
Trace[424076345]: [549.405863ms] [549.405863ms] END
I0706 21:57:25.059472 31 trace.go:205] Trace[1283943853]: "List" url:/apis/fleet.cattle.io/v1alpha1/namespaces/cattle-system/bundledeployments,user-agent:fleetcontroller/v0.0.0 (linux/amd64) kubernetes/$Format,client:10.42.0.12 (06-Jul-2021 21:57:24.509) (total time: 549ms):
Trace[1283943853]: ---"Listing from storage done" 549ms (21:57:00.059)
Trace[1283943853]: [549.620991ms] [549.620991ms] END
2021-07-06 21:57:25.554892 W | etcdserver: read-only range request "key:\"/registry/configmaps/fleet-system/fleet-agent-lock\" " with result "range_response_count:1 size:556" took too long (538.037122ms) to execute
2021-07-06 21:57:25.554966 W | etcdserver: read-only range request "key:\"/registry/validatingwebhookconfigurations/\" range_end:\"/registry/validatingwebhookconfigurations0\" count_only:true " with result "range_response_count:0 size:9" took too long (777.365572ms) to execute
2021-07-06 21:57:25.554997 W | etcdserver: read-only range request "key:\"/registry/configmaps/fleet-system/gitjob\" " with result "range_response_count:1 size:529" took too long (955.968044ms) to execute
2021-07-06 21:57:25.555085 W | etcdserver: read-only range request "key:\"/registry/services/endpoints/kube-system/kube-scheduler\" " with result "range_response_count:1 size:575" took too long (537.686682ms) to execute
2021-07-06 21:57:25.555139 W | etcdserver: read-only range request "key:\"/registry/configmaps/kube-system/k3s\" " with result "range_response_count:1 size:507" took too long (536.778779ms) to execute
2021-07-06 21:57:25.555170 W | etcdserver: read-only range request "key:\"/registry/fleet.cattle.io/bundledeployments/cluster-fleet-default-c-fbkpm-dc16f03b27e8/\" range_end:\"/registry/fleet.cattle.io/bundledeployments/cluster-fleet-default-c-fbkpm-dc16f03b27e80\" " with result "range_response_count:0 size:7" took too long (494.048543ms) to execute
2021-07-06 21:57:25.555226 W | etcdserver: read-only range request "key:\"/registry/services/endpoints/kube-system/cloud-controller-manager\" " with result "range_response_count:1 size:595" took too long (955.849117ms) to execute
I0706 21:57:25.555642 31 trace.go:205] Trace[1659740233]: "Get" url:/api/v1/namespaces/kube-system/endpoints/cloud-controller-manager,user-agent:k3s/v1.19.8+k3s1 (linux/amd64) kubernetes/95fc76b/leader-election,client:127.0.0.1 (06-Jul-2021 21:57:24.598) (total time: 957ms):
Trace[1659740233]: ---"About to write a response" 957ms (21:57:00.555)
Trace[1659740233]: [957.406397ms] [957.406397ms] END
I0706 21:57:25.555655 31 trace.go:205] Trace[1763334349]: "Get" url:/api/v1/namespaces/kube-system/configmaps/k3s,user-agent:k3s/v1.19.8+k3s1 (linux/amd64) kubernetes/95fc76b,client:127.0.0.1 (06-Jul-2021 21:57:25.017) (total time: 537ms):
Trace[1763334349]: ---"About to write a response" 537ms (21:57:00.555)
Trace[1763334349]: [537.800673ms] [537.800673ms] END
I0706 21:57:25.555923 31 trace.go:205] Trace[975052670]: "Get" url:/api/v1/namespaces/kube-system/endpoints/kube-scheduler,user-agent:k3s/v1.19.8+k3s1 (linux/amd64) kubernetes/95fc76b/leader-election,client:127.0.0.1 (06-Jul-2021 21:57:25.017) (total time: 538ms):
Trace[975052670]: ---"About to write a response" 538ms (21:57:00.555)
Trace[975052670]: [538.733529ms] [538.733529ms] END
I0706 21:57:25.556224 31 trace.go:205] Trace[924882632]: "Get" url:/api/v1/namespaces/fleet-system/configmaps/fleet-agent-lock,user-agent:fleetagent/v0.0.0 (linux/amd64) kubernetes/$Format,client:10.42.0.14 (06-Jul-2021 21:57:25.016) (total time: 539ms):
Trace[924882632]: ---"About to write a response" 539ms (21:57:00.556)
Trace[924882632]: [539.644095ms] [539.644095ms] END
I0706 21:57:25.557263 31 trace.go:205] Trace[261187034]: "Get" url:/api/v1/namespaces/fleet-system/configmaps/gitjob,user-agent:gitjob/v0.0.0 (linux/amd64) kubernetes/$Format,client:10.42.0.13 (06-Jul-2021 21:57:24.598) (total time: 958ms):
Trace[261187034]: ---"About to write a response" 958ms (21:57:00.557)
Trace[261187034]: [958.525905ms] [958.525905ms] END
2021-07-06 21:57:25.562004 W | etcdserver: read-only range request "key:\"/registry/minions/\" range_end:\"/registry/minions0\" count_only:true " with result "range_response_count:0 size:9" took too long (877.629608ms) to execute
2021-07-06 21:57:26.030049 W | etcdserver: read-only range request "key:\"/registry/leases/kube-system/kube-controller-manager\" " with result "range_response_count:1 size:496" took too long (468.783108ms) to execute
2021-07-06 21:57:26.616312 W | etcdserver: read-only range request "key:\"/registry/management.cattle.io/fleetworkspaces/\" range_end:\"/registry/management.cattle.io/fleetworkspaces0\" count_only:true " with result "range_response_count:0 size:9" took too long (942.384903ms) to execute
2021-07-06 21:57:26.616430 W | etcdserver: read-only range request "key:\"/registry/leases/kube-system/kube-scheduler\" " with result "range_response_count:1 size:478" took too long (1.052794065s) to execute
2021-07-06 21:57:26.616485 W | etcdserver: read-only range request "key:\"/registry/configmaps/fleet-system/fleet-controller-lock\" " with result "range_response_count:1 size:578" took too long (615.410102ms) to execute
2021-07-06 21:57:26.616576 W | etcdserver: read-only range request "key:\"/registry/fleet.cattle.io/bundledeployments/cluster-fleet-local-local-1a3d67d0a899/\" range_end:\"/registry/fleet.cattle.io/bundledeployments/cluster-fleet-local-local-1a3d67d0a8990\" " with result "range_response_count:1 size:4048" took too long (1.054418121s) to execute
2021-07-06 21:57:26.616598 W | etcdserver: read-only range request "key:\"/registry/configmaps/kube-system/cattle-controllers\" " with result "range_response_count:1 size:541" took too long (615.805305ms) to execute
2021-07-06 21:57:26.616716 W | etcdserver: read-only range request "key:\"/registry/fleet.cattle.io/gitreporestrictions/\" range_end:\"/registry/fleet.cattle.io/gitreporestrictions0\" count_only:true " with result "range_response_count:0 size:7" took too long (822.484865ms) to execute
2021-07-06 21:57:26.616810 W | etcdserver: read-only range request "key:\"/registry/leases/kube-system/cloud-controller-manager\" " with result "range_response_count:1 size:498" took too long (1.054741806s) to execute
I0706 21:57:26.617692 31 trace.go:205] Trace[1281454136]: "GuaranteedUpdate etcd3" type:*core.Endpoints (06-Jul-2021 21:57:26.033) (total time: 584ms):
Trace[1281454136]: ---"Transaction committed" 583ms (21:57:00.617)
Trace[1281454136]: [584.042691ms] [584.042691ms] END
I0706 21:57:26.617826 31 trace.go:205] Trace[2005303491]: "Update" url:/api/v1/namespaces/kube-system/endpoints/kube-controller-manager,user-agent:k3s/v1.19.8+k3s1 (linux/amd64) kubernetes/95fc76b/leader-election,client:127.0.0.1 (06-Jul-2021 21:57:26.033) (total time: 584ms):
Trace[2005303491]: ---"Object stored in database" 584ms (21:57:00.617)
Trace[2005303491]: [584.337008ms] [584.337008ms] END
I0706 21:57:26.618679 31 trace.go:205] Trace[1981551208]: "Get" url:/api/v1/namespaces/kube-system/configmaps/cattle-controllers,user-agent:rancher/v0.0.0 (linux/amd64) kubernetes/$Format,client:127.0.0.1 (06-Jul-2021 21:57:26.000) (total time: 618ms):
Trace[1981551208]: ---"About to write a response" 618ms (21:57:00.618)
Trace[1981551208]: [618.255882ms] [618.255882ms] END
I0706 21:57:26.619136 31 trace.go:205] Trace[105024771]: "Get" url:/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/cloud-controller-manager,user-agent:k3s/v1.19.8+k3s1 (linux/amd64) kubernetes/95fc76b/leader-election,client:127.0.0.1 (06-Jul-2021 21:57:25.561) (total time: 1057ms):
Trace[105024771]: ---"About to write a response" 1057ms (21:57:00.619)
Trace[105024771]: [1.057359359s] [1.057359359s] END
I0706 21:57:26.619407 31 trace.go:205] Trace[1431742688]: "Get" url:/api/v1/namespaces/fleet-system/configmaps/fleet-controller-lock,user-agent:fleetcontroller/v0.0.0 (linux/amd64) kubernetes/$Format,client:10.42.0.12 (06-Jul-2021 21:57:26.000) (total time: 618ms):
Trace[1431742688]: ---"About to write a response" 618ms (21:57:00.619)
Trace[1431742688]: [618.524217ms] [618.524217ms] END
I0706 21:57:26.619671 31 trace.go:205] Trace[1560055087]: "Get" url:/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-scheduler,user-agent:k3s/v1.19.8+k3s1 (linux/amd64) kubernetes/95fc76b/leader-election,client:127.0.0.1 (06-Jul-2021 21:57:25.562) (total time: 1057ms):
Trace[1560055087]: ---"About to write a response" 1056ms (21:57:00.619)
Trace[1560055087]: [1.057041348s] [1.057041348s] END
I0706 21:57:26.620527 31 trace.go:205] Trace[2082372938]: "List etcd3" key:/fleet.cattle.io/bundledeployments/cluster-fleet-local-local-1a3d67d0a899,resourceVersion:,resourceVersionMatch:,limit:0,continue: (06-Jul-2021 21:57:25.561) (total time: 1058ms):
Trace[2082372938]: [1.058915142s] [1.058915142s] END
I0706 21:57:26.621000 31 trace.go:205] Trace[61109469]: "List" url:/apis/fleet.cattle.io/v1alpha1/namespaces/cluster-fleet-local-local-1a3d67d0a899/bundledeployments,user-agent:fleetcontroller/v0.0.0 (linux/amd64) kubernetes/$Format,client:10.42.0.12 (06-Jul-2021 21:57:25.561) (total time: 1059ms):
Trace[61109469]: ---"Listing from storage done" 1058ms (21:57:00.620)
Trace[61109469]: [1.059481757s] [1.059481757s] END
2021-07-06 21:57:27.268109 W | etcdserver: read-only range request "key:\"/registry/configmaps/kube-system/rancher-controller-lock\" " with result "range_response_count:1 size:581" took too long (755.495607ms) to execute
2021-07-06 21:57:27.268351 W | etcdserver: read-only range request "key:\"/registry/validatingwebhookconfigurations/\" range_end:\"/registry/validatingwebhookconfigurations0\" count_only:true " with result "range_response_count:0 size:9" took too long (887.47713ms) to execute
2021-07-06 21:57:27.268478 W | etcdserver: read-only range request "key:\"/registry/fleet.cattle.io/contents/\" range_end:\"/registry/fleet.cattle.io/contents0\" count_only:true " with result "range_response_count:0 size:9" took too long (996.625033ms) to execute
I0706 21:57:27.268615 31 trace.go:205] Trace[1391504418]: "Get" url:/api/v1/namespaces/kube-system/configmaps/rancher-controller-lock,user-agent:rancher-operator/v0.0.0 (linux/amd64) kubernetes/$Format,client:10.42.0.10 (06-Jul-2021 21:57:26.512) (total time: 756ms):
Trace[1391504418]: ---"About to write a response" 756ms (21:57:00.268)
Trace[1391504418]: [756.329687ms] [756.329687ms] END
2021-07-06 21:57:27.268627 W | etcdserver: read-only range request "key:\"/registry/fleet.cattle.io/clusterregistrations/\" range_end:\"/registry/fleet.cattle.io/clusterregistrations0\" count_only:true " with result "range_response_count:0 size:9" took too long (1.205005894s) to execute
I0706 21:57:27.269398 31 trace.go:205] Trace[732563269]: "GuaranteedUpdate etcd3" type:*core.ConfigMap (06-Jul-2021 21:57:26.622) (total time: 646ms):
Trace[732563269]: ---"Transaction committed" 646ms (21:57:00.269)
Trace[732563269]: [646.707793ms] [646.707793ms] END
I0706 21:57:27.271109 31 trace.go:205] Trace[327809226]: "Update" url:/api/v1/namespaces/kube-system/configmaps/cattle-controllers,user-agent:rancher/v0.0.0 (linux/amd64) kubernetes/$Format,client:127.0.0.1 (06-Jul-2021 21:57:26.622) (total time: 648ms):
Trace[327809226]: ---"Object stored in database" 648ms (21:57:00.270)
Trace[327809226]: [648.546491ms] [648.546491ms] END
I0706 21:57:27.269438 31 trace.go:205] Trace[915946027]: "GuaranteedUpdate etcd3" type:*core.Endpoints (06-Jul-2021 21:57:26.622) (total time: 647ms):
Trace[915946027]: ---"Transaction committed" 646ms (21:57:00.269)
Trace[915946027]: [647.163551ms] [647.163551ms] END
I0706 21:57:27.269548 31 trace.go:205] Trace[1222921518]: "GuaranteedUpdate etcd3" type:*core.ConfigMap (06-Jul-2021 21:57:26.625) (total time: 644ms):
Trace[1222921518]: ---"Transaction committed" 643ms (21:57:00.269)
Trace[1222921518]: [644.096812ms] [644.096812ms] END
I0706 21:57:27.271259 31 trace.go:205] Trace[1760661400]: "Update" url:/api/v1/namespaces/kube-system/endpoints/cloud-controller-manager,user-agent:k3s/v1.19.8+k3s1 (linux/amd64) kubernetes/95fc76b/leader-election,client:127.0.0.1 (06-Jul-2021 21:57:26.622) (total time: 649ms):
Trace[1760661400]: ---"Object stored in database" 648ms (21:57:00.271)
Trace[1760661400]: [649.063434ms] [649.063434ms] END
I0706 21:57:27.271340 31 trace.go:205] Trace[220202022]: "Update" url:/api/v1/namespaces/fleet-system/configmaps/fleet-controller-lock,user-agent:fleetcontroller/v0.0.0 (linux/amd64) kubernetes/$Format,client:10.42.0.12 (06-Jul-2021 21:57:26.625) (total time: 646ms):
Trace[220202022]: ---"Object stored in database" 645ms (21:57:00.271)
Trace[220202022]: [646.034381ms] [646.034381ms] END
I0706 21:57:27.269559 31 trace.go:205] Trace[1244201710]: "GuaranteedUpdate etcd3" type:*core.Endpoints (06-Jul-2021 21:57:26.621) (total time: 647ms):
Trace[1244201710]: ---"Transaction committed" 647ms (21:57:00.269)
Trace[1244201710]: [647.800885ms] [647.800885ms] END
I0706 21:57:27.271546 31 trace.go:205] Trace[905260727]: "Update" url:/api/v1/namespaces/kube-system/endpoints/kube-scheduler,user-agent:k3s/v1.19.8+k3s1 (linux/amd64) kubernetes/95fc76b/leader-election,client:127.0.0.1 (06-Jul-2021 21:57:26.621) (total time: 649ms):
Trace[905260727]: ---"Object stored in database" 649ms (21:57:00.271)
Trace[905260727]: [649.898353ms] [649.898353ms] END
2021-07-06 21:57:27.809693 W | etcdserver: read-only range request "key:\"/registry/networkpolicies/\" range_end:\"/registry/networkpolicies0\" count_only:true " with result "range_response_count:0 size:9" took too long (592.703356ms) to execute
2021-07-06 21:57:27.809765 W | etcdserver: read-only range request "key:\"/registry/fleet.cattle.io/bundledeployments/default/\" range_end:\"/registry/fleet.cattle.io/bundledeployments/default0\" " with result "range_response_count:0 size:7" took too long (1.184999226s) to execute
2021-07-06 21:57:27.809793 W | etcdserver: read-only range request "key:\"/registry/csinodes/\" range_end:\"/registry/csinodes0\" count_only:true " with result "range_response_count:0 size:9" took too long (734.868527ms) to execute
2021-07-06 21:57:27.809854 W | etcdserver: read-only range request "key:\"/registry/leases/kube-system/kube-controller-manager\" " with result "range_response_count:1 size:496" took too long (1.185154968s) to execute
I0706 21:57:27.810328 31 trace.go:205] Trace[1790760849]: "Get" url:/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-controller-manager,user-agent:k3s/v1.19.8+k3s1 (linux/amd64) kubernetes/95fc76b/leader-election,client:127.0.0.1 (06-Jul-2021 21:57:26.624) (total time: 1185ms):
Trace[1790760849]: ---"About to write a response" 1185ms (21:57:00.810)
Trace[1790760849]: [1.185961359s] [1.185961359s] END
I0706 21:57:27.810457 31 trace.go:205] Trace[997264329]: "List etcd3" key:/fleet.cattle.io/bundledeployments/default,resourceVersion:,resourceVersionMatch:,limit:0,continue: (06-Jul-2021 21:57:26.624) (total time: 1185ms):
Trace[997264329]: [1.185954044s] [1.185954044s] END
I0706 21:57:27.810617 31 trace.go:205] Trace[2138559626]: "List" url:/apis/fleet.cattle.io/v1alpha1/namespaces/default/bundledeployments,user-agent:fleetcontroller/v0.0.0 (linux/amd64) kubernetes/$Format,client:10.42.0.12 (06-Jul-2021 21:57:26.624) (total time: 1186ms):
Trace[2138559626]: ---"Listing from storage done" 1186ms (21:57:00.810)
Trace[2138559626]: [1.186151709s] [1.186151709s] END
I0706 21:57:27.814067 31 trace.go:205] Trace[1098542618]: "GuaranteedUpdate etcd3" type:*core.ConfigMap (06-Jul-2021 21:57:27.271) (total time: 542ms):
Trace[1098542618]: ---"Transaction committed" 542ms (21:57:00.813)
Trace[1098542618]: [542.675023ms] [542.675023ms] END
I0706 21:57:27.814180 31 trace.go:205] Trace[456078593]: "Update" url:/api/v1/namespaces/kube-system/configmaps/rancher-controller-lock,user-agent:rancher-operator/v0.0.0 (linux/amd64) kubernetes/$Format,client:10.42.0.10 (06-Jul-2021 21:57:27.271) (total time: 542ms):
Trace[456078593]: ---"Object stored in database" 542ms (21:57:00.814)
Trace[456078593]: [542.994498ms] [542.994498ms] END
2021-07-06 21:57:28.318149 W | etcdserver: read-only range request "key:\"/registry/leases/kube-system/cloud-controller-manager\" " with result "range_response_count:1 size:498" took too long (1.043093741s) to execute
2021-07-06 21:57:28.318210 W | etcdserver: read-only range request "key:\"/registry/leases/kube-system/kube-scheduler\" " with result "range_response_count:1 size:478" took too long (1.04321445s) to execute
2021-07-06 21:57:28.318259 W | etcdserver: read-only range request "key:\"/registry/management.cattle.io/projectalertrules/\" range_end:\"/registry/management.cattle.io/projectalertrules0\" count_only:true " with result "range_response_count:0 size:9" took too long (653.951179ms) to execute
2021-07-06 21:57:28.318279 W | etcdserver: read-only range request "key:\"/registry/configmaps/fleet-system/gitjob\" " with result "range_response_count:1 size:529" took too long (752.798282ms) to execute
2021-07-06 21:57:28.318406 W | etcdserver: request "header:<ID:7587855627833333093 > txn:<compare:<target:MOD key:\"/registry/leases/kube-system/kube-controller-manager\" mod_revision:4779213 > success:<request_put:<key:\"/registry/leases/kube-system/kube-controller-manager\" value_size:416 >> failure:<request_range:<key:\"/registry/leases/kube-system/kube-controller-manager\" > >>" with result "size:20" took too long (143.658366ms) to execute
I0706 21:57:28.318880 31 trace.go:205] Trace[566774293]: "Get" url:/api/v1/namespaces/fleet-system/configmaps/gitjob,user-agent:gitjob/v0.0.0 (linux/amd64) kubernetes/$Format,client:10.42.0.13 (06-Jul-2021 21:57:27.565) (total time: 753ms):
Trace[566774293]: ---"About to write a response" 753ms (21:57:00.318)
Trace[566774293]: [753.690437ms] [753.690437ms] END
I0706 21:57:28.318915 31 trace.go:205] Trace[1144667558]: "GuaranteedUpdate etcd3" type:*coordination.Lease (06-Jul-2021 21:57:27.816) (total time: 502ms):
Trace[1144667558]: ---"Transaction committed" 501ms (21:57:00.318)
Trace[1144667558]: [502.127196ms] [502.127196ms] END
I0706 21:57:28.319052 31 trace.go:205] Trace[1767621777]: "Update" url:/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-controller-manager,user-agent:k3s/v1.19.8+k3s1 (linux/amd64) kubernetes/95fc76b/leader-election,client:127.0.0.1 (06-Jul-2021 21:57:27.816) (total time: 502ms):
Trace[1767621777]: ---"Object stored in database" 502ms (21:57:00.318)
Trace[1767621777]: [502.37163ms] [502.37163ms] END
I0706 21:57:28.319781 31 trace.go:205] Trace[1353826087]: "Get" url:/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/cloud-controller-manager,user-agent:k3s/v1.19.8+k3s1 (linux/amd64) kubernetes/95fc76b/leader-election,client:127.0.0.1 (06-Jul-2021 21:57:27.274) (total time: 1045ms):
Trace[1353826087]: ---"About to write a response" 1045ms (21:57:00.319)
Trace[1353826087]: [1.045347192s] [1.045347192s] END
I0706 21:57:28.319858 31 trace.go:205] Trace[1636055698]: "Get" url:/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-scheduler,user-agent:k3s/v1.19.8+k3s1 (linux/amd64) kubernetes/95fc76b/leader-election,client:127.0.0.1 (06-Jul-2021 21:57:27.274) (total time: 1045ms):
Trace[1636055698]: ---"About to write a response" 1045ms (21:57:00.319)
Trace[1636055698]: [1.04512728s] [1.04512728s] END
2021-07-06 21:57:28.804566 W | etcdserver: read-only range request "key:\"/registry/fleet.cattle.io/bundledeployments/dog/\" range_end:\"/registry/fleet.cattle.io/bundledeployments/dog0\" " with result "range_response_count:0 size:7" took too long (989.165297ms) to execute
2021-07-06 21:57:28.804721 W | etcdserver: read-only range request "key:\"/registry/configmaps/fleet-system/fleet-agent-lock\" " with result "range_response_count:1 size:556" took too long (770.797654ms) to execute
2021-07-06 21:57:28.804904 W | etcdserver: read-only range request "key:\"/registry/configmaps/kube-system/k3s\" " with result "range_response_count:1 size:507" took too long (771.018523ms) to execute
2021-07-06 21:57:28.805005 W | etcdserver: read-only range request "key:\"/registry/namespaces/default\" " with result "range_response_count:1 size:979" took too long (822.282638ms) to execute
2021-07-06 21:57:28.805249 W | etcdserver: request "header:<ID:7587855627833333095 > txn:<compare:<target:MOD key:\"/registry/configmaps/fleet-system/gitjob\" mod_revision:4779218 > success:<request_put:<key:\"/registry/configmaps/fleet-system/gitjob\" value_size:460 >> failure:<request_range:<key:\"/registry/configmaps/fleet-system/gitjob\" > >>" with result "size:20" took too long (143.083033ms) to execute
I0706 21:57:28.805509 31 trace.go:205] Trace[946942170]: "Get" url:/api/v1/namespaces/default,user-agent:k3s/v1.19.8+k3s1 (linux/amd64) kubernetes/95fc76b,client:127.0.0.1 (06-Jul-2021 21:57:27.982) (total time: 823ms):
Trace[946942170]: ---"About to write a response" 823ms (21:57:00.805)
Trace[946942170]: [823.085512ms] [823.085512ms] END
I0706 21:57:28.806741 31 trace.go:205] Trace[1008522111]: "Get" url:/api/v1/namespaces/fleet-system/configmaps/fleet-agent-lock,user-agent:fleetagent/v0.0.0 (linux/amd64) kubernetes/$Format,client:10.42.0.14 (06-Jul-2021 21:57:28.032) (total time: 774ms):
Trace[1008522111]: ---"About to write a response" 774ms (21:57:00.806)
Trace[1008522111]: [774.253571ms] [774.253571ms] END
I0706 21:57:28.807055 31 trace.go:205] Trace[1475009761]: "Get" url:/api/v1/namespaces/kube-system/configmaps/k3s,user-agent:k3s/v1.19.8+k3s1 (linux/amd64) kubernetes/95fc76b,client:127.0.0.1 (06-Jul-2021 21:57:28.033) (total time: 773ms):
Trace[1475009761]: ---"About to write a response" 773ms (21:57:00.806)
Trace[1475009761]: [773.435874ms] [773.435874ms] END
I0706 21:57:28.808836 31 trace.go:205] Trace[1256903153]: "List etcd3" key:/fleet.cattle.io/bundledeployments/dog,resourceVersion:,resourceVersionMatch:,limit:0,continue: (06-Jul-2021 21:57:27.813) (total time: 995ms):
Trace[1256903153]: [995.615821ms] [995.615821ms] END
I0706 21:57:28.808984 31 trace.go:205] Trace[622776768]: "List" url:/apis/fleet.cattle.io/v1alpha1/namespaces/dog/bundledeployments,user-agent:fleetcontroller/v0.0.0 (linux/amd64) kubernetes/$Format,client:10.42.0.12 (06-Jul-2021 21:57:27.813) (total time: 995ms):
Trace[622776768]: ---"Listing from storage done" 995ms (21:57:00.808)
Trace[622776768]: [995.804591ms] [995.804591ms] END
[ameyer@docker02 ~]$
The mgmt container came back up finally.
Not sure what kind of machines you are using, but given the log (which does not seem to include the moment it stops or crashes), my best guess is that the machine does not have enough resources to run this properly (CPU/memory/disk IOPS). Please share the specs of the machine running Rancher and of the machines you are adding to the cluster; if you are using cloud instances, please specify which cloud and which instance type.
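For reference, here is a minimal sketch of commands that could be run on the Rancher host to gather those specs and spot disk pressure (the "took too long" etcd warnings in the log usually point at slow storage). It assumes a standard Linux host with the usual utilities installed; the fio line is only an optional latency test and was not requested in this thread.
# CPU, memory, and disk overview for the host running Rancher
nproc                                                      # number of CPU cores
free -h                                                    # total and available memory
df -h /var/lib/docker                                      # free space where Docker keeps its data
vmstat 1 5                                                 # watch the 'wa' column for I/O wait
docker info 2>/dev/null | grep -iE 'cpus|total memory'     # what Docker itself reports
# optional: rough fsync latency check (etcd is sensitive to slow disks); requires fio to be installed
sudo fio --name=etcd-check --directory=/var/lib/docker --ioengine=sync --rw=write --bs=2300 --fdatasync=1 --size=22m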
The log from the cattle-cluster-agent pod will show why it can't be run successfully (which indeed causes the UI to show the 500 error).
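As a sketch of how to dig into that pod (the pod name below is a placeholder; run this wherever a kubeconfig for the cluster is available, e.g. the kubeconfig download described in the linked docs page):
kubectl -n cattle-system describe pod <cattle-cluster-agent-pod>      # the Events section usually shows scheduling, image-pull, or DNS problems
kubectl -n cattle-system logs <cattle-cluster-agent-pod> --previous   # logs from the previous container instance if the pod keeps restarting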
Seems to be resolved. I added an additional cattle-cluster-agent and added another worker node. So far everything is green except for the metrics-server. Still very green/new to this, so thanks for being patient with me. I'll raise a new thread for that issue.
Still having the issue with the cluster explorer though.
This cluster is currently Provisioning.
Error while applying agent YAML, it will be retried automatically: exit status 1, clusterrole.rbac.authorization.k8s.io/proxy-clusterrole-kubeapiserver unchanged clusterrolebinding.rbac.authorization.k8s.io/proxy-role-binding-kubernetes-master unchanged namespace/cattle-system unchanged serviceaccount/cattle unchanged clusterrolebinding.rbac.authorization.k8s.io/cattle-admin-binding unchanged secret/cattle-credentials-4cbf9c3 unchanged clusterrole.rbac.authorization.k8s.io/cattle-admin unchanged Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply daemonset.apps/cattle-node-agent unchanged daemonset.apps/kube-api-auth unchanged The Deployment "cattle-cluster-agent" is invalid: spec.selector: Invalid value: v1.LabelSelector{MatchLabels:map[string]string{"app":"cattle-cluster-agent", "workload.user.cattle.io/workloadselector":"deployment-cattle-system-cattle-cluster-agent"}, MatchExpressions:[]v1.LabelSelectorRequirement(nil)}: field is immutable
This is not resolved if not all pods are running. Also, there is no need to add an "additional cattle-cluster-agent", as that is supposed to happen automatically; if it doesn't, something in the cluster is not functioning correctly. Without the steps you have taken or the requested info, this becomes a guessing game that won't lead to a solution quickly. The info is requested so we can get to the root cause of the issues you are seeing; trying to solve things yourself is good, but not sharing exactly what you have done does not help in a debug/root-cause situation.
In the end, the sequence of creating a cluster and adding nodes that cover at least all roles should be enough to get a working cluster. Anything else is a bug, which can be caused by your environment or by Rancher, but to diagnose that we need the requested info and the steps you have taken if you are trying things yourself.
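On the "field is immutable" error above: a Deployment's spec.selector cannot be changed after creation, so a manually created cattle-cluster-agent (carrying the extra workload.user.cattle.io/workloadselector label) will block Rancher's own apply of the agent YAML. A possible way to clear it, assuming Rancher re-applies the agent YAML automatically afterwards, is to remove the conflicting Deployment; the cluster will briefly lose its agent connection until it is recreated.
kubectl -n cattle-system get deployment cattle-cluster-agent -o yaml | grep -A 6 'selector:'   # confirm which selector the existing Deployment carries
kubectl -n cattle-system delete deployment cattle-cluster-agent                                # let Rancher recreate it with its own selector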
Sorry. I re-deployed the cattle-cluster-agent and then deployed a second one. Will try to get docker logs compressed so I can attach them here.
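A minimal sketch for capturing and compressing those logs, assuming the single-node Docker install (the container name below is a placeholder for whatever docker ps lists for the rancher/rancher container):
docker logs --since 24h <rancher-container> > rancher.log 2>&1   # last 24 hours of stdout and stderr into one file
gzip rancher.log                                                 # produces rancher.log.gz, small enough to attach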