Cluster Registration Command

Greetings, I am very new to Rancher and very new to Kubernetes. I managed to create a cluster successfully, then learnt about Rancher, so I went about installing and running it.

After installation I successfully imported the cluster that I had built. I then tried to add monitoring, but kept getting an error saying that there was a conflict with a local version of monitoring on the cluster. So I went about deleting what I could to resolve the issue. I ended up deleting something that I should not have, so I went about importing the cluster again, but this time I ran into the following problem and have no clue how to resolve it.

I use the following command to register my cluster in Rancher.
It runs successfully, except that on the last line I see an error.
Below is an extract of what I see.

root@masternode:~# curl --insecure -sfL https://192.168.1.100:8443/v3/import/q9xdgttjpgjf4nj77khvwstldxxb5ppjqqcbrjq47mbh9r6xk5z9nk_c-pgv6q.yaml | kubectl apply -f -

The Deployment “cattle-cluster-agent” is invalid: spec.template.spec.containers[0].env[7].name: Required value

Everything I have read about the error seems to point to an internal Rancher bug, but I am not certain about this, nor how to go about fixing it.

Can anyone please help me?
Lawrence

Please share versions used and the YAML that is being returned from the curl request.
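
For example, you could save the manifest locally instead of piping it straight into kubectl, and then paste the env section of the cattle-cluster-agent Deployment here (same URL as in your command; cluster-import.yaml is just an arbitrary local filename):

# Download the registration manifest without applying it
curl --insecure -sfL https://192.168.1.100:8443/v3/import/q9xdgttjpgjf4nj77khvwstldxxb5ppjqqcbrjq47mbh9r6xk5z9nk_c-pgv6q.yaml -o cluster-import.yaml

# env[7] in the error is the 8th env entry of the first container; an entry with an
# explicitly empty name will show up here (a missing name field needs a manual look)
grep -n 'name: ""' cluster-import.yaml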

@superseb
Thank you for responding. I am very sorry, but I am very new to this whole environment; I am doing and learning a lot without really understanding what it is that I am actually doing.
As I said earlier, I did have this working in Rancher, and while trying to find what was conflicting with monitoring, I managed to delete something that broke Rancher.
I have now managed to recover this: I used the same cluster name as the one that I deleted, so right now I am back where I started.
When I look at my Rancher setup in the browser, I choose the corresponding cluster and then, on the left-hand side, select Monitoring Overview.

I see 5 dashboard panels, and all of them show as unavailable.
I am not sure who I can ask to point me in the right direction to resolve this.
I am very sorry if this is not the correct place and I hope that you can direct me.
My original issue is resolved.
Thanks for the follow-up.
Lawrence

Thanks for sharing the version.

I know the thread is already pretty old, but I got the same error while trying to import an existing RKE2 (v1.24.9+rke2r1) cluster into Rancher (v2.7), which I deployed as a Docker installation.

The Deployment “cattle-cluster-agent” is invalid: spec.template.spec.containers[0].env[10].name: Required value

The YAML that I get from the import command includes something like this:

      containers:
        - name: cluster-register
          imagePullPolicy: IfNotPresent
          env:
          - name: CATTLE_IS_RKE
            value: "false"
          - name: CATTLE_SERVER
            value: "myurl"
          - name: CATTLE_CA_CHECKSUM
            value: "d1b311677bad66bd2d88dc7ed6f4421a2a6b472bbd0e1fda1a282efedd2cc5b0"
          - name: CATTLE_CLUSTER
            value: "true"
          - name: CATTLE_K8S_MANAGED
            value: "true"
          - name: CATTLE_CLUSTER_REGISTRY
            value: ""
          - name: CATTLE_SERVER_VERSION
            value: v2.7.0
          - name: CATTLE_INSTALL_UUID
            value: c04aecf6-460e-4922-ba2d-1472f1008296
          - name: CATTLE_INGRESS_IP_DOMAIN
            value: sslip.io
          - name: ""   # empty env name: the "Required value" that kubectl rejects
          image: rancher/rancher-agent:v2.7.0
          volumeMounts:
          - name: cattle-credentials
            mountPath: /cattle-credentials
            readOnly: true

And yep, there is an empty name. I redirected the URL output to a file, removed the empty entry, and applied it. So far so good, the command was accepted.
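
For reference, the workaround was roughly the following (a sketch; the URL is the placeholder from the import wizard and add2rancher.yaml is just the local filename I picked):

# Save the registration manifest instead of piping it straight into kubectl
curl --insecure -sfL https://myhost:8443/v3/import/xyz.yaml -o add2rancher.yaml

# Find the broken entry: an env item with an empty name
grep -n 'name: ""' add2rancher.yaml

I then deleted that line in an editor and applied the edited file: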

root@lx-k8s-thor-01:~# cat add2rancher.yaml | kubectl apply -f -
clusterrole.rbac.authorization.k8s.io/proxy-clusterrole-kubeapiserver unchanged
clusterrolebinding.rbac.authorization.k8s.io/proxy-role-binding-kubernetes-master unchanged
namespace/cattle-system unchanged
serviceaccount/cattle unchanged
clusterrolebinding.rbac.authorization.k8s.io/cattle-admin-binding unchanged
secret/cattle-credentials-d4abed7 unchanged
clusterrole.rbac.authorization.k8s.io/cattle-admin unchanged
Warning: spec.template.spec.affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution.nodeSelectorTerms[0].matchExpressions[0].key: beta.kubernetes.io/os is deprecated since v1.14; use "kubernetes.io/os" instead
deployment.apps/cattle-cluster-agent created
service/cattle-cluster-agent unchanged

But in Rancher I still get:

Any idea how to sort this out? Obviously this was my first contact with Rancher :frowning:

Note: the original YAML was produced by Rancher and offered via the import command wizard:
curl --insecure -sfL https://myhost:8443/v3/import/xyz.yaml | kubectl apply -f -

Another reply/update: I deleted the cluster from Rancher and started the import again. This time the YAML file was produced correctly, and the cluster is now imported into Rancher.

Not sure what is causing the issue; maybe this happens only for the very first import?
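
For anyone else hitting this: one way to confirm from the cluster side that registration actually went through is to look at the agent that the manifest deploys into cattle-system (deployment name taken from the apply output above):

# The agent Deployment should report READY 1/1 once it has connected to Rancher
kubectl -n cattle-system get deployment cattle-cluster-agent

# If the cluster stays in Pending/Unavailable in the UI, the agent logs are the first place to look
kubectl -n cattle-system logs deploy/cattle-cluster-agent --tail=50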

It supports both push and pull modes to manage the member clusters, which means the main difference between pull and push modes is the way manifests are deployed to the member cluster.

Furthermore, to register a cluster in Rancher v2:

On the Clusters page, click Add Cluster.
Under Registration, click the type of Kubernetes cluster you want to register.
Enter your cluster name.
Use member roles to configure authorization for the cluster.
Click Create.
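
After clicking Create, Rancher shows a registration command to run on the target cluster with kubectl, along the lines of the ones earlier in this thread (placeholders for the server URL and token; the --insecure variant is for self-signed certificates):

curl --insecure -sfL https://<rancher-server>/v3/import/<token>.yaml | kubectl apply -f -

Once the manifest is applied and the cattle-cluster-agent connects, the cluster should move from Pending to Active in the Rancher UI.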