Issues with local cluster in Rancher v2.5.7

Hi community,

First of all, I’m sorry if this isn’t the right place to ask for help.

To begin with, I have successfully deployed Rancher as a single-node Docker container. For now there is only one cluster in the UI: local (Provider: K3s). However, when I check its workloads in the “cattle-system” namespace, there are a lot of failed “helm-operation-xxxxx” pods. As far as I understand, they are trying to pull the image “rancher/shell:v0.1.6”, but my VM doesn’t have direct access to the internet. I’m using Artifactory, but I’m not quite sure how to integrate it with Rancher. Here are some of the logs from the Rancher container:

E0220 13:04:57.268766     26 remote_image.go:113] PullImage "rancher/shell:v0.1.6" from image service failed: rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/rancher/shell:v0.1.6": failed to resolve reference "docker.io/rancher/shell:v0.1.6": failed to do request: Head https://registry-1.docker.io/v2/rancher/shell/manifests/v0.1.6: dial tcp: lookup registry-1.docker.io: no such host
E0220 13:04:57.268808     26 kuberuntime_image.go:50] Pull image "rancher/shell:v0.1.6" failed: rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/rancher/shell:v0.1.6": failed to resolve reference "docker.io/rancher/shell:v0.1.6": failed to do request: Head https://registry-1.docker.io/v2/rancher/shell/manifests/v0.1.6: dial tcp: lookup registry-1.docker.io: no such host
E0220 13:04:57.268865     26 kuberuntime_manager.go:801] container start failed: ErrImagePull: rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/rancher/shell:v0.1.6": failed to resolve reference "docker.io/rancher/shell:v0.1.6": failed to do request: Head https://registry-1.docker.io/v2/rancher/shell/manifests/v0.1.6: dial tcp: lookup registry-1.docker.io: no such host
E0220 13:04:57.269670     26 pod_workers.go:191] Error syncing pod f7002f13-3f4a-477d-b38e-b31a86854c8f ("helm-operation-d6b4f_cattle-system(f7002f13-3f4a-477d-b38e-b31a86854c8f)"), skipping: [failed to "StartContainer" for "helm" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"docker.io/rancher/shell:v0.1.6\": failed to resolve reference \"docker.io/rancher/shell:v0.1.6\": failed to do request: Head https://registry-1.docker.io/v2/rancher/shell/manifests/v0.1.6: dial tcp: lookup registry-1.docker.io: no such host", failed to "StartContainer" for "proxy" with ImagePullBackOff: "Back-off pulling image \"rancher/shell:v0.1.6\""]
E0220 13:05:00.264620     26 pod_workers.go:191] Error syncing pod 7f461430-9fa9-40a0-9292-b232999d4c68 ("helm-operation-csqtv_cattle-system(7f461430-9fa9-40a0-9292-b232999d4c68)"), skipping: [failed to "StartContainer" for "helm" with ImagePullBackOff: "Back-off pulling image \"rancher/shell:v0.1.6\"", failed to "StartContainer" for "proxy" with ImagePullBackOff: "Back-off pulling image \"rancher/shell:v0.1.6\""]
E0220 13:05:01.263203     26 pod_workers.go:191] Error syncing pod b36eedc8-bfaa-40fe-a347-7afe191c4806 ("helm-operation-zpkh5_cattle-system(b36eedc8-bfaa-40fe-a347-7afe191c4806)"), skipping: [failed to "StartContainer" for "helm" with ImagePullBackOff: "Back-off pulling image \"rancher/shell:v0.1.6\"", failed to "StartContainer" for "proxy" with ImagePullBackOff: "Back-off pulling image \"rancher/shell:v0.1.6\""]
W0220 13:05:01.348877     26 iptables.go:550] Could not set up iptables canary mangle/KUBE-PROXY-CANARY: error creating chain "KUBE-PROXY-CANARY": exit status 3: iptables v1.8.3 (legacy): can't initialize iptables table `mangle': Table does not exist (do you need to insmod?)
Perhaps iptables or your kernel needs to be upgraded.
E0220 13:05:02.263542     26 pod_workers.go:191] Error syncing pod cfa1b9ea-6f54-4486-b72c-516d85758097 ("helm-operation-qlplj_cattle-system(cfa1b9ea-6f54-4486-b72c-516d85758097)"), skipping: [failed to "StartContainer" for "helm" with ImagePullBackOff: "Back-off pulling image \"rancher/shell:v0.1.6\"", failed to "StartContainer" for "proxy" with ImagePullBackOff: "Back-off pulling image \"rancher/shell:v0.1.6\""]
E0220 13:05:05.262470     26 pod_workers.go:191] Error syncing pod 5c049d76-8ee5-42d6-b64d-da0715a58253 ("helm-operation-s9w8x_cattle-system(5c049d76-8ee5-42d6-b64d-da0715a58253)"), skipping: [failed to "StartContainer" for "helm" with ImagePullBackOff: "Back-off pulling image \"rancher/shell:v0.1.6\"", failed to "StartContainer" for "proxy" with ImagePullBackOff: "Back-off pulling image \"rancher/shell:v0.1.6\""]
E0220 13:05:07.264113     26 pod_workers.go:191] Error syncing pod ea230264-a67c-4829-a04f-e1cca0a26858 ("helm-operation-qdcft_cattle-system(ea230264-a67c-4829-a04f-e1cca0a26858)"), skipping: [failed to "StartContainer" for "helm" with ImagePullBackOff: "Back-off pulling image \"rancher/shell:v0.1.6\"", failed to "StartContainer" for "proxy" with ImagePullBackOff: "Back-off pulling image \"rancher/shell:v0.1.6\""]
E0220 13:05:07.264202     26 pod_workers.go:191] Error syncing pod e3db70fb-dc2e-4b8e-b70c-58c77fce33c3 ("helm-operation-6dx7j_cattle-system(e3db70fb-dc2e-4b8e-b70c-58c77fce33c3)"), skipping: [failed to "StartContainer" for "helm" with ImagePullBackOff: "Back-off pulling image \"rancher/shell:v0.1.6\"", failed to "StartContainer" for "proxy" with ImagePullBackOff: "Back-off pulling image \"rancher/shell:v0.1.6\""]
E0220 13:05:09.262982     26 pod_workers.go:191] Error syncing pod f7002f13-3f4a-477d-b38e-b31a86854c8f ("helm-operation-d6b4f_cattle-system(f7002f13-3f4a-477d-b38e-b31a86854c8f)"), skipping: [failed to "StartContainer" for "helm" with ImagePullBackOff: "Back-off pulling image \"rancher/shell:v0.1.6\"", failed to "StartContainer" for "proxy" with ImagePullBackOff: "Back-off pulling image \"rancher/shell:v0.1.6\""]
E0220 13:05:12.941958     26 proxier.go:833] Failed to ensure that filter chain KUBE-EXTERNAL-SERVICES exists: error creating chain "KUBE-EXTERNAL-SERVICES": exit status 3: iptables v1.8.3 (legacy): can't initialize iptables table `filter': Table does not exist (do you need to insmod?)
Perhaps iptables or your kernel needs to be upgraded.
I0220 13:05:12.941981     26 proxier.go:825] Sync failed; retrying in 30s
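
In case it helps, this is roughly how I’ve been inspecting the failing pods from inside the Rancher container (the container name “rancher” below is just a placeholder for whatever docker ps shows on my VM):

# open a shell in the single-node Rancher container (it runs an embedded K3s)
docker exec -it rancher bash

# inside the container: list the failing helm-operation pods
kubectl -n cattle-system get pods | grep helm-operation

# and look at the recent events of one of them (pod name taken from the logs above)
kubectl -n cattle-system describe pod helm-operation-d6b4f | tail -n 20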

If it uses https://registry-1.docker.io/v2/rancher/shell/manifests/v0.1.6 by default, how can I change that and point it to the remote repository in my Artifactory?
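
To be concrete, this is what I was thinking of trying, if I read the air-gap instructions correctly: re-running the Rancher container with CATTLE_SYSTEM_DEFAULT_REGISTRY pointing at my Artifactory remote, so that system images like rancher/shell get pulled through it (the hostname and port below are placeholders for my setup):

docker run -d --restart=unless-stopped \
  -p 80:80 -p 443:443 \
  --privileged \
  -e CATTLE_SYSTEM_DEFAULT_REGISTRY=artifactory.example.com:5000 \
  -e CATTLE_SYSTEM_CATALOG=bundled \
  artifactory.example.com:5000/rancher/rancher:v2.5.7

Would that be the right approach, or is there a way to change the default registry on an already running installation?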

Also, why is it reporting that something is wrong with iptables?
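
My guess is that the corresponding iptables kernel modules simply aren’t loaded on the VM; this is what I was going to check on the host, does that sound like the right track?

# check whether the iptables table modules are loaded on the VM
lsmod | grep -E 'iptable_mangle|iptable_filter|iptable_nat'

# if they are missing, try loading them
sudo modprobe iptable_mangle
sudo modprobe iptable_filter
sudo modprobe iptable_nat

# verify that the mangle table is usable afterwards
sudo iptables -t mangle -L -n | head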

Here are also the logs of the “coredns” pod in the “kube-system” namespace:

E0221 11:03:21.572858       1 reflector.go:153] pkg/mod/k8s.io/client-go@v0.17.4/tools/cache/reflector.go:105: Failed to list *v1.Service: Get "https://10.43.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.43.0.1:443: i/o timeout
I0221 11:03:21.573700       1 trace.go:116] Trace[76937649]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.17.4/tools/cache/reflector.go:105 (started: 2023-02-21 11:02:51.573324103 +0000 UTC m=+79610.824252103) (total time: 30.000355749s):
Trace[76937649]: [30.000355749s] [30.000355749s] END
E0221 11:03:21.573716       1 reflector.go:153] pkg/mod/k8s.io/client-go@v0.17.4/tools/cache/reflector.go:105: Failed to list *v1.Namespace: Get "https://10.43.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.43.0.1:443: i/o timeout
I0221 11:03:21.581112       1 trace.go:116] Trace[680769068]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.17.4/tools/cache/reflector.go:105 (started: 2023-02-21 11:02:51.580590915 +0000 UTC m=+79610.831518947) (total time: 30.000490836s):
Trace[680769068]: [30.000490836s] [30.000490836s] END
E0221 11:03:21.581140       1 reflector.go:153] pkg/mod/k8s.io/client-go@v0.17.4/tools/cache/reflector.go:105: Failed to list *v1.Endpoints: Get "https://10.43.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0": dial tcp 10.43.0.1:443: i/o timeout
[INFO] plugin/ready: Still waiting on: "kubernetes"
[INFO] plugin/ready: Still waiting on: "kubernetes"
[INFO] plugin/ready: Still waiting on: "kubernetes"
I0221 11:03:52.575439       1 trace.go:116] Trace[1586546868]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.17.4/tools/cache/reflector.go:105 (started: 2023-02-21 11:03:22.574862069 +0000 UTC m=+79641.825790074) (total time: 30.00054772s):
Trace[1586546868]: [30.00054772s] [30.00054772s] END
I0221 11:03:52.575439       1 trace.go:116] Trace[1991890552]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.17.4/tools/cache/reflector.go:105 (started: 2023-02-21 11:03:22.57511142 +0000 UTC m=+79641.826039389) (total time: 30.000300219s):
Trace[1991890552]: [30.000300219s] [30.000300219s] END
E0221 11:03:52.575486       1 reflector.go:153] pkg/mod/k8s.io/client-go@v0.17.4/tools/cache/reflector.go:105: Failed to list *v1.Service: Get "https://10.43.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.43.0.1:443: i/o timeout
E0221 11:03:52.575497       1 reflector.go:153] pkg/mod/k8s.io/client-go@v0.17.4/tools/cache/reflector.go:105: Failed to list *v1.Namespace: Get "https://10.43.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.43.0.1:443: i/o timeout
I0221 11:03:52.581878       1 trace.go:116] Trace[1946538740]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.17.4/tools/cache/reflector.go:105 (started: 2023-02-21 11:03:22.581369385 +0000 UTC m=+79641.832297418) (total time: 30.000473813s):
Trace[1946538740]: [30.000473813s] [30.000473813s] END
E0221 11:03:52.581908       1 reflector.go:153] pkg/mod/k8s.io/client-go@v0.17.4/tools/cache/reflector.go:105: Failed to list *v1.Endpoints: Get "https://10.43.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0": dial tcp 10.43.0.1:443: i/o timeout
[INFO] plugin/ready: Still waiting on: "kubernetes"
[INFO] plugin/ready: Still waiting on: "kubernetes"
[INFO] plugin/ready: Still waiting on: "kubernetes"
I0221 11:04:23.576573       1 trace.go:116] Trace[363778883]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.17.4/tools/cache/reflector.go:105 (started: 2023-02-21 11:03:53.575644528 +0000 UTC m=+79672.826572541) (total time: 30.000887785s):
Trace[363778883]: [30.000887785s] [30.000887785s] END
E0221 11:04:23.576603       1 reflector.go:153] pkg/mod/k8s.io/client-go@v0.17.4/tools/cache/reflector.go:105: Failed to list *v1.Service: Get "https://10.43.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.43.0.1:443: i/o timeout
I0221 11:04:23.577242       1 trace.go:116] Trace[1870827804]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.17.4/tools/cache/reflector.go:105 (started: 2023-02-21 11:03:53.57672832 +0000 UTC m=+79672.827656293) (total time: 30.000494119s):
Trace[1870827804]: [30.000494119s] [30.000494119s] END
E0221 11:04:23.577263       1 reflector.go:153] pkg/mod/k8s.io/client-go@v0.17.4/tools/cache/reflector.go:105: Failed to list *v1.Namespace: Get "https://10.43.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.43.0.1:443: i/o timeout
I0221 11:04:23.582675       1 trace.go:116] Trace[1741598137]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.17.4/tools/cache/reflector.go:105 (started: 2023-02-21 11:03:53.582104632 +0000 UTC m=+79672.833032632) (total time: 30.00054383s):
Trace[1741598137]: [30.00054383s] [30.00054383s] END
E0221 11:04:23.582701       1 reflector.go:153] pkg/mod/k8s.io/client-go@v0.17.4/tools/cache/reflector.go:105: Failed to list *v1.Endpoints: Get "https://10.43.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0": dial tcp 10.43.0.1:443: i/o timeout
[INFO] plugin/ready: Still waiting on: "kubernetes"
I0221 11:04:54.577257       1 trace.go:116] Trace[1394930608]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.17.4/tools/cache/reflector.go:105 (started: 2023-02-21 11:04:24.57678791 +0000 UTC m=+79703.827715879) (total time: 30.00042909s):
Trace[1394930608]: [30.00042909s] [30.00042909s] END
E0221 11:04:54.577285       1 reflector.go:153] pkg/mod/k8s.io/client-go@v0.17.4/tools/cache/reflector.go:105: Failed to list *v1.Service: Get "https://10.43.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.43.0.1:443: i/o timeout
I0221 11:04:54.578290       1 trace.go:116] Trace[351286758]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.17.4/tools/cache/reflector.go:105 (started: 2023-02-21 11:04:24.57786297 +0000 UTC m=+79703.828790998) (total time: 30.000402044s):
Trace[351286758]: [30.000402044s] [30.000402044s] END
E0221 11:04:54.578332       1 reflector.go:153] pkg/mod/k8s.io/client-go@v0.17.4/tools/cache/reflector.go:105: Failed to list *v1.Namespace: Get "https://10.43.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.43.0.1:443: i/o timeout
I0221 11:04:54.583327       1 trace.go:116] Trace[415384518]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.17.4/tools/cache/reflector.go:105 (started: 2023-02-21 11:04:24.582873789 +0000 UTC m=+79703.833801798) (total time: 30.000414296s):
Trace[415384518]: [30.000414296s] [30.000414296s] END
E0221 11:04:54.583347       1 reflector.go:153] pkg/mod/k8s.io/client-go@v0.17.4/tools/cache/reflector.go:105: Failed to list *v1.Endpoints: Get "https://10.43.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0": dial tcp 10.43.0.1:443: i/o timeout
[INFO] plugin/ready: Still waiting on: "kubernetes"
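
The 10.43.0.1:443 timeouts look to me like CoreDNS can’t reach the in-cluster Kubernetes API service; I suspect this is related to the iptables errors above, since kube-proxy implements that service IP with iptables rules. This is how I was planning to verify it from inside the Rancher container, assuming kubectl there behaves the way I think it does:

# inside the Rancher container (docker exec -it <rancher-container> bash)

# the "kubernetes" service should map 10.43.0.1:443 to the real API server endpoint
kubectl get svc kubernetes -n default
kubectl get endpoints kubernetes -n default

# check the status of the coredns pod itself
kubectl -n kube-system get pods -o wide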

Local cluster workloads:

Thanks in advance,