InvalidImageName when adding an RKE cluster to Rancher

Hi,
I'm new to K8s and Rancher. I built a cluster with RKE following this doc; my cluster.yml looks like this:

---
nodes:
  - address: 'ect-dev-rke0001.mydomain.fr'
    user: k8s-manager
    role: [controlplane,etcd,worker]
  - address: 'ect-dev-rke0002.mydomain.fr'
    user: k8s-manager
    role: [controlplane,etcd,worker]
  - address: 'ect-dev-rke0003.mydomain.fr'
    user: k8s-manager
    role: [controlplane,etcd,worker]
  - address: 'ect-dev-rke0004.mydomain.fr'
    user: k8s-manager
    role: [worker]
  - address: 'ect-dev-rke0005.mydomain.fr'
    user: k8s-manager
    role: [worker]
  - address: 'ect-dev-rke0006.mydomain.fr'
    user: k8s-manager
    role: [worker]
  - address: 'ect-dev-rke0007.mydomain.fr'
    user: k8s-manager
    role: [worker]
  - address: 'ect-dev-rke0008.mydomain.fr'
    user: k8s-manager
    role: [worker]
  - address: 'ect-dev-rke0009.mydomain.fr'
    user: k8s-manager
    role: [worker]

services:
  etcd:
    snapshot: true
    creation: 6h
    retention: 24h

# Required for external TLS termination with ingress-nginx v0.22+
ingress:
  provider: nginx
  options:
    use-forwarded-headers: 'true'

Then I ran rke up and checked the pods:

kubectl get pods --all-namespaces
NAMESPACE       NAME                                      READY   STATUS      RESTARTS   AGE
ingress-nginx   default-http-backend-67cf578fc4-2r9bl     1/1     Running     0          51s
ingress-nginx   nginx-ingress-controller-67tgv            1/1     Running     0          51s
ingress-nginx   nginx-ingress-controller-c9cr4            1/1     Running     0          51s
ingress-nginx   nginx-ingress-controller-dfsvb            1/1     Running     0          51s
ingress-nginx   nginx-ingress-controller-fxknk            1/1     Running     0          51s
ingress-nginx   nginx-ingress-controller-gzrq2            1/1     Running     0          51s
ingress-nginx   nginx-ingress-controller-vrvpd            1/1     Running     0          51s
ingress-nginx   nginx-ingress-controller-xm2ts            1/1     Running     0          51s
ingress-nginx   nginx-ingress-controller-zgfbj            1/1     Running     0          51s
kube-system     canal-9bzrn                               2/2     Running     0          65s
kube-system     canal-9flhx                               2/2     Running     0          65s
kube-system     canal-kxcff                               2/2     Running     0          65s
kube-system     canal-lkq29                               2/2     Running     0          65s
kube-system     canal-mnwb6                               2/2     Running     0          65s
kube-system     canal-ppm4h                               2/2     Running     0          65s
kube-system     canal-tdshb                               2/2     Running     0          65s
kube-system     canal-whdxz                               2/2     Running     0          65s
kube-system     coredns-7c5566588d-8w4sx                  1/1     Running     0          54s
kube-system     coredns-7c5566588d-bp9ql                  1/1     Running     0          61s
kube-system     coredns-autoscaler-65bfc8d47d-qt52x       1/1     Running     0          60s
kube-system     metrics-server-6b55c64f86-88f9g           1/1     Running     0          56s
kube-system     rke-coredns-addon-deploy-job-f87kx        0/1     Completed   0          62s
kube-system     rke-ingress-controller-deploy-job-kcv58   0/1     Completed   0          52s
kube-system     rke-metrics-addon-deploy-job-4zfzq        0/1     Completed   0          57s
kube-system     rke-network-plugin-deploy-job-48sr4       0/1     Completed   0          68s

So far everything looks fine. I then added the cluster in the Rancher web UI; the import is accepted, but the UI shows the cluster as pending. So I ran the registration command on my cluster (the --insecure variant for a Rancher-generated certificate):

curl --insecure -sfL https://ect-dev-ngi0001.educonnect.in.cloe.education.gouv.fr/v3/import/7whdpv96r65nm2p8d29p7td7vxhzrb6tms7qm8ds8z6v2g49l26vvc.yaml | kubectl apply -f -

The apply completes with no warnings, but when I check the pods, the new cattle agents are stuck with a strange InvalidImageName status:

kubectl get pods --all-namespaces
NAMESPACE       NAME                                      READY   STATUS             RESTARTS   AGE
cattle-system   cattle-cluster-agent-548b8bb59d-5ql7n     0/1     InvalidImageName   0          33s
cattle-system   cattle-node-agent-bsgdb                   0/1     InvalidImageName   0          33s
cattle-system   cattle-node-agent-cmtpg                   0/1     InvalidImageName   0          33s
cattle-system   cattle-node-agent-nhq8m                   0/1     InvalidImageName   0          33s
cattle-system   cattle-node-agent-nkwr6                   0/1     InvalidImageName   0          33s
cattle-system   cattle-node-agent-rscpr                   0/1     InvalidImageName   0          33s
cattle-system   cattle-node-agent-spqdz                   0/1     InvalidImageName   0          33s
cattle-system   cattle-node-agent-tdvnj                   0/1     InvalidImageName   0          33s
cattle-system   cattle-node-agent-vd98k                   0/1     InvalidImageName   0          33s
ingress-nginx   default-http-backend-67cf578fc4-2r9bl     1/1     Running            0          3m19s
ingress-nginx   nginx-ingress-controller-67tgv            1/1     Running            0          3m19s
ingress-nginx   nginx-ingress-controller-c9cr4            1/1     Running            0          3m19s
ingress-nginx   nginx-ingress-controller-dfsvb            1/1     Running            0          3m19s
ingress-nginx   nginx-ingress-controller-fxknk            1/1     Running            0          3m19s
ingress-nginx   nginx-ingress-controller-gzrq2            1/1     Running            0          3m19s
ingress-nginx   nginx-ingress-controller-vrvpd            1/1     Running            0          3m19s
ingress-nginx   nginx-ingress-controller-xm2ts            1/1     Running            0          3m19s
ingress-nginx   nginx-ingress-controller-zgfbj            1/1     Running            0          3m19s
kube-system     canal-9bzrn                               2/2     Running            0          3m33s
kube-system     canal-9flhx                               2/2     Running            0          3m33s
kube-system     canal-kxcff                               2/2     Running            0          3m33s
kube-system     canal-lkq29                               2/2     Running            0          3m33s
kube-system     canal-mnwb6                               2/2     Running            0          3m33s
kube-system     canal-ppm4h                               2/2     Running            0          3m33s
kube-system     canal-tdshb                               2/2     Running            0          3m33s
kube-system     canal-whdxz                               2/2     Running            0          3m33s
kube-system     coredns-7c5566588d-8w4sx                  1/1     Running            0          3m22s
kube-system     coredns-7c5566588d-bp9ql                  1/1     Running            0          3m29s
kube-system     coredns-autoscaler-65bfc8d47d-qt52x       1/1     Running            0          3m28s
kube-system     metrics-server-6b55c64f86-88f9g           1/1     Running            0          3m24s
kube-system     rke-coredns-addon-deploy-job-f87kx        0/1     Completed          0          3m30s
kube-system     rke-ingress-controller-deploy-job-kcv58   0/1     Completed          0          3m20s
kube-system     rke-metrics-addon-deploy-job-4zfzq        0/1     Completed          0          3m25s
kube-system     rke-network-plugin-deploy-job-48sr4       0/1     Completed          0          3m36s
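From what I've read, InvalidImageName means the kubelet could not even parse the image reference (it never gets as far as pulling). As a rough illustration, here is my own simplified approximation of the image-reference grammar (not Docker's actual parser); it shows the kinds of strings that get rejected, e.g. a reference with an accidental scheme prefix or an empty path component:

```shell
#!/bin/sh
# Very rough sanity check, loosely mirroring the image-reference grammar:
#   [registry[:port]/]lowercase/path[:tag]
# A "https://" prefix or a double slash fails the check -- exactly the kind
# of malformed reference that kubelet reports as InvalidImageName.
check_ref() {
  if printf '%s\n' "$1" | grep -Eq '^([a-z0-9.-]+(:[0-9]+)?/)?[a-z0-9._-]+(/[a-z0-9._-]+)*(:[A-Za-z0-9._-]+)?$'; then
    echo "valid: $1"
  else
    echo "invalid: $1"
  fi
}

check_ref 'rancher/rancher-agent:v2.3.2'                          # valid
check_ref 'myregistry.local:5000/rancher/rancher-agent:v2.3.2'    # valid
check_ref 'https://myregistry.local/rancher/rancher-agent:v2.3.2' # invalid
check_ref 'myregistry.local//rancher/rancher-agent:v2.3.2'        # invalid
```

(The registry hostname and agent tag above are placeholders, not my actual values.)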

Could this be related to this old fixed bug? Or to the private registry we pull through? The setup is the latest Rancher release on RHEL 7 VMware nodes.
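If the private registry is involved, I understand RKE lets you declare it in cluster.yml so the nodes pull system images through it; something like this (URL and credentials are placeholders, not my real values):

```yaml
private_registries:
  - url: myregistry.local:5000   # placeholder; note: no "https://" scheme prefix here
    user: registry-user
    password: registry-password
    is_default: true             # use this registry for all system images
```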